
Plaid.com’s Monitoring System for 9600+ Integrations




Plaid.com - a financial technology company - has integrations with over 9600 financial institutions, from which it collects data that its customers can later use. Monitoring these integrations is a challenge due to their heterogeneous nature multiplied by their sheer number. The same metrics have different interpretations in different integrations, and the metrics to alert on also differ. The team relies on AWS Kinesis, Prometheus, Alertmanager and Grafana to solve the challenges of scalability and low latency.

Plaid's previous monitoring system depended heavily on their logging system, based on Elasticsearch (ES). Nagios queried the ES cluster and forwarded any alerts to PagerDuty. Along with a lack of customizability, this system could not scale with increasing traffic, as ES's retention period shrank with the growing volume of logs. The lack of a historical view of metrics, manual configuration of alerts, and a fragile dependency on logging changes led the team to rethink their approach to monitoring. They moved on to analyzing their requirements - what to monitor and how, in the context of their specific use case. Functional requirements included prioritizing metrics based on customer impact and instrumentation costs, whereas technical ones focused on scalability, low-latency queries, support for high cardinality, and ease of use for developers.

The team decided on Prometheus as the time series database, Kinesis as the event stream processor, Alertmanager for alerting, and Grafana for visualization. The last three were chosen as they were flexible, and Prometheus and Grafana work well with each other. They designed the monitoring pipeline so that it can ingest data from both standard and custom components and generate metrics. Services exporting standard metrics can simply use the standard pipeline, whereas others send events to Kinesis, from which an event consumer pulls the events and generates metrics. Both paths end up as metrics in Prometheus, and the rest of the pipeline is identical from then on. Events typically take less than 5 seconds to become metrics.
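The article does not show Plaid's event schema or consumer code. As a minimal sketch of the custom half of such a pipeline, the TypeScript below uses the Node.js prom-client library to turn incoming events into Prometheus metrics and expose them for scraping; the event shape, metric names and port are assumptions, not Plaid's actual schema (the Kinesis-reading side appears in a later sketch).

```typescript
// event-consumer.ts - sketch only: the event shape and metric names below are
// illustrative, not Plaid's actual schema.
import { createServer } from "http";
import { Counter, Histogram, register } from "prom-client";

// Hypothetical event emitted by services and carried over Kinesis.
export interface IntegrationEvent {
  integration: string;            // e.g. "bank_of_example"
  outcome: "success" | "error";
  durationMs: number;
}

const requests = new Counter({
  name: "integration_requests_total",
  help: "Requests per integration, labelled by outcome",
  labelNames: ["integration", "outcome"],
});

const latency = new Histogram({
  name: "integration_request_duration_seconds",
  help: "Request latency per integration",
  labelNames: ["integration"],
  buckets: [0.1, 0.5, 1, 2, 5, 10],
});

// Called once per record the Kinesis consumer hands over.
export function handleEvent(event: IntegrationEvent): void {
  requests.inc({ integration: event.integration, outcome: event.outcome });
  latency.observe({ integration: event.integration }, event.durationMs / 1000);
}

// Expose the aggregated metrics on /metrics for Prometheus to scrape.
createServer(async (_req, res) => {
  res.setHeader("Content-Type", register.contentType);
  res.end(await register.metrics());
}).listen(9102);
```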


Alertmanager - a part of the Prometheus project - has a file-based configuration. Can this potentially become a challenge to maintain if the rate of new integrations (and thus new metrics) increases? InfoQ got in touch with Zheng, Software Engineer at Plaid, to find out more.


Hand-crafted configuration files for alertmanager have not been a big issue because we can set rules based on alert categories rather than individual alerts (for example, a rule which notifies Pagerduty for any high-priority alerts and Slack for lower-priority alerts). On the other hand, the Prometheus configuration has definitely been a challenge for us due to having such a large number of integrations. The initial monitoring implementation relied on hand-crafted configuration files, but a follow-up project was building tooling to generate config files from JS code instead of copy-pasting per-integration rules.
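Zheng does not describe the generation tooling itself. As a rough illustration of generating per-integration Prometheus alerting rules from code rather than copy-pasting them, here is a TypeScript sketch; the integration names, the PromQL expression and the 5% threshold are made up, and the severity label only hints at the category-based Alertmanager routing mentioned in the quote.

```typescript
// gen-rules.ts - sketch of generating a Prometheus rules file from code instead
// of copy-pasting per-integration blocks. Integration names, the PromQL
// expression and the 5% threshold are made up for the example.
import { writeFileSync } from "fs";
import { dump } from "js-yaml";

const integrations = ["bank_of_example", "example_credit_union"]; // hypothetical

const rules = integrations.map((integration) => ({
  alert: `HighErrorRate_${integration}`,
  expr:
    `sum(rate(integration_requests_total{integration="${integration}",outcome="error"}[5m]))` +
    ` / sum(rate(integration_requests_total{integration="${integration}"}[5m])) > 0.05`,
  for: "10m",
  // A severity label lets Alertmanager route by category (e.g. page vs. Slack)
  // instead of needing a routing rule per alert.
  labels: { severity: "page", integration },
  annotations: { summary: `Error rate above 5% for ${integration}` },
}));

writeFileSync(
  "integration_alerts.yml",
  dump({ groups: [{ name: "integration-alerts", rules }] }),
);
```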
 


The team seems to have made good progress on the ease of use goal as 31 out of a team of 45 engineers have contributed to the monitoring config. The standard pipeline does not need any instrumentation - libraries shared across the codebase automatically export metrics. Zheng elaborated on how they standardize metric conventions:


Shared libraries help enforce common metric naming, since in those cases, the libraries control the naming, and all the calling service needs to do is specify a label for itself. Using protobuf enum values for some labels has helped us standardize there, too. However, we don’t yet have strong naming conventions for custom per-service metrics, and it is hard for someone to discover metrics in prometheus without already knowing what they are. Our current solution for discoverability has mostly been to build Grafana dashboards with the most important per-service prometheus metrics.
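As an illustration of how a shared library can own metric names while callers only identify themselves, here is a minimal sketch; the metric name and the enum standing in for a protobuf-generated one are assumptions, not Plaid's actual conventions.

```typescript
// metrics-lib.ts - sketch of a shared instrumentation helper: the library owns
// the metric name; a calling service only identifies itself via a label. The
// names and the enum standing in for a protobuf-generated one are assumptions.
import { Counter } from "prom-client";

// Stand-in for a protobuf enum used to standardise label values.
export enum Outcome {
  SUCCESS = "SUCCESS",
  ERROR = "ERROR",
}

const apiRequests = new Counter({
  name: "service_api_requests_total",
  help: "API requests made by internal services",
  labelNames: ["service", "outcome"],
});

export function recordApiRequest(service: string, outcome: Outcome): void {
  apiRequests.inc({ service, outcome });
}

// Caller side - all it specifies is its own name:
//   recordApiRequest("link-service", Outcome.SUCCESS);
```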
 

The Prometheus deployment at Plaid retains only a few months of metrics. However, this was not a challenge where historical data is concerned, says Zheng, because "our initial Prometheus usage focused on immediate alerting, so only having a few months of history was not a big issue. We have found more use cases for historical analysis of metrics over time, and recently shipped a follow-up project which exports Prometheus metrics to our long-term data warehouse (in AWS Redshift)".
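The article does not detail how that export works. One plausible approach is to page through Prometheus's query_range HTTP API and stage the resulting rows for a warehouse load; in the sketch below the address, query, step and loading strategy are all assumptions.

```typescript
// export-metrics.ts - sketch only: the article says metrics are exported to
// Redshift but not how; the address, query, step and loading strategy here are
// assumptions. Uses Prometheus's query_range HTTP API (Node 18+ global fetch).
const PROM_URL = "http://prometheus:9090"; // hypothetical address

interface RangeSeries {
  metric: Record<string, string>;
  values: [number, string][]; // [unix timestamp, stringified value]
}

async function exportRange(query: string, start: Date, end: Date): Promise<void> {
  const params = new URLSearchParams({
    query,
    start: String(start.getTime() / 1000),
    end: String(end.getTime() / 1000),
    step: "300", // one sample every five minutes
  });
  const res = await fetch(`${PROM_URL}/api/v1/query_range?${params}`);
  const body = (await res.json()) as { data: { result: RangeSeries[] } };

  // Flatten into rows; in practice these would be staged (e.g. to S3) and
  // COPYed into Redshift rather than printed.
  for (const series of body.data.result) {
    for (const [ts, value] of series.values) {
      console.log({
        time: new Date(ts * 1000).toISOString(),
        labels: series.metric,
        value: Number(value),
      });
    }
  }
}

exportRange(
  "sum by (integration) (rate(integration_requests_total[5m]))",
  new Date(Date.now() - 24 * 3600 * 1000),
  new Date(),
);
```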

Streaming data can arrive out of order or late at the consumer, due to network latencies or reordering. Kinesis handles this in Plaid's case, says Zheng:

Using Kinesis lets us maintain ordering even when the Kinesis consumer goes down. We have seen the event consumer lag for a few minutes due to latency and then spike to catch up, which ends up causing 1-2 spurious pages. Another benefit of using Kinesis is being able to have parallel readers, so we also have a parallel "preproduction" monitoring environment reading from the same event stream where we test monitoring changes at full scale. As a result, we've generally seen very good stability from the event consumer.
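The article does not describe the consumer's internals. The sketch below shows, under assumptions, how a checkpointing reader built on the AWS SDK resumes in order after downtime, and why a second, independently checkpointed "preproduction" reader on the same stream does not interfere with the first.

```typescript
// consumer-loop.ts - sketch of how a checkpointing Kinesis reader resumes in
// order after downtime. The single-shard loop and the checkpoint handling are
// simplifications; the article does not describe Plaid's consumer at this level.
import {
  KinesisClient,
  GetShardIteratorCommand,
  GetRecordsCommand,
} from "@aws-sdk/client-kinesis";
import { handleEvent } from "./event-consumer"; // from the earlier sketch

const kinesis = new KinesisClient({ region: "us-east-1" });

async function consume(stream: string, shardId: string, checkpoint?: string) {
  // Resume just after the last processed sequence number, so nothing is skipped
  // and per-shard ordering is preserved even after the consumer was down.
  let { ShardIterator: iterator } = await kinesis.send(
    new GetShardIteratorCommand({
      StreamName: stream,
      ShardId: shardId,
      ShardIteratorType: checkpoint ? "AFTER_SEQUENCE_NUMBER" : "TRIM_HORIZON",
      StartingSequenceNumber: checkpoint,
    }),
  );

  while (iterator) {
    const { Records = [], NextShardIterator } = await kinesis.send(
      new GetRecordsCommand({ ShardIterator: iterator, Limit: 1000 }),
    );
    for (const record of Records) {
      handleEvent(JSON.parse(Buffer.from(record.Data!).toString()));
      // persistCheckpoint(record.SequenceNumber); // hypothetical helper
    }
    iterator = NextShardIterator;
    await new Promise((r) => setTimeout(r, 1000)); // simple poll interval
  }
}

// A "preproduction" environment is simply a second consumer with its own
// checkpoints reading the same stream; the two never interfere.
consume("monitoring-events", "shardId-000000000000");
```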
 


Monitoring also plays a part in the deployment pipeline, where code is pushed to an internal staging environment before being pushed to production. The current workflow at Plaid often involves developers checking dashboards (including monitoring metrics) before promoting a deploy to subsequent environments.
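The checks Zheng describes are manual dashboard reviews. Purely as an illustration of the kind of query such a check rests on, here is a hypothetical gate against a staging Prometheus instance; the address, metric and 1% threshold are assumptions.

```typescript
// deploy-check.ts - purely illustrative: the article describes developers
// checking dashboards by hand; this shows the kind of Prometheus instant query
// such a check rests on. The address, metric and 1% threshold are assumptions.
const STAGING_PROM = "http://prometheus-staging:9090"; // hypothetical address

async function stagingLooksHealthy(): Promise<boolean> {
  const query =
    'sum(rate(integration_requests_total{outcome="error"}[15m]))' +
    " / sum(rate(integration_requests_total[15m]))";
  const res = await fetch(
    `${STAGING_PROM}/api/v1/query?query=${encodeURIComponent(query)}`,
  );
  const body = (await res.json()) as any;
  // An instant query returns a vector; each sample's value is [ts, "number"].
  const errorRate = Number(body.data.result[0]?.value?.[1] ?? 0);
  return errorRate < 0.01;
}

stagingLooksHealthy().then((ok) =>
  console.log(ok ? "OK to promote" : "Hold the deploy"),
);
```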



Original article: http://www.infoq.com/news/2018/08/plaid-monitoring-scaling


