Monitoring tools offer two core types of functionality: alerting, based on aliveness checks and on comparing metrics to thresholds, and displaying time-series charts of status counters. Nagios and Graphite are the prototypical tools for these two tasks, respectively.
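To make the first kind of functionality concrete, here is a minimal sketch of a threshold-based check in the style of a Nagios plugin, which signals status through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL). The metric name and threshold values are hypothetical, chosen only for illustration:

```python
# Sketch of a Nagios-style threshold check. Nagios plugins report status
# via exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.

def check_threshold(value, warn, crit, label="metric"):
    """Compare a sampled metric against warning/critical thresholds
    and return the appropriate Nagios exit code."""
    if value >= crit:
        print(f"CRITICAL - {label}={value} (>= {crit})")
        return 2
    if value >= warn:
        print(f"WARNING - {label}={value} (>= {warn})")
        return 1
    print(f"OK - {label}={value}")
    return 0

# Hypothetical sample: 150 connected threads, warn at 100, critical at 200.
status = check_threshold(150, warn=100, crit=200, label="threads_connected")
```

A real plugin would sample the metric itself and call `sys.exit(status)` so the scheduler can pick up the result.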
But these tools don't answer the crucial questions about what we should monitor. What kinds of aliveness and health checks should we build into Nagios? Which metrics should we monitor with thresholds to raise alarms, and what should those thresholds be? What graphs should we build from status counters, which ones should we examine, and what do they mean?
We need guiding principles to help answer these questions. This webinar briefly introduces the principles that motivate and inform what we do at VividCortex, then explains which types of health checks and charts are valuable and what conclusions should be drawn from them. The webinar is focused mostly on MySQL database monitoring, but will be relevant beyond MySQL as well. Some of the questions we answer are:
- Which MySQL status counters are core, and which are peripheral?
- What do MySQL's status metrics actually mean?
- Which subsystems inside MySQL are the most common causes of problems in production?
- What is the unit of work-getting-done in MySQL, and how can you measure it?
- Which open-source tools do a good job at monitoring in the way we recommend at VividCortex?
- Which new and/or popular open-source tools should you evaluate when choosing a solution?
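As a taste of the counter-based measurement the questions above touch on: MySQL status counters such as `Questions` are cumulative since server start, so turning them into a rate of work-getting-done requires differencing two samples over a known interval. This is a minimal sketch using hypothetical sample values in place of real `SHOW GLOBAL STATUS` output:

```python
# MySQL status counters are cumulative, so per-second rates come from
# differencing two samples taken a known interval apart.

def counter_rates(prev, curr, interval_seconds):
    """Per-second rate for each cumulative counter present in both samples."""
    return {
        name: (curr[name] - prev[name]) / interval_seconds
        for name in curr
        if name in prev
    }

# Hypothetical samples, 10 seconds apart.
sample_t0 = {"Questions": 1_000_000, "Com_select": 800_000}
sample_t1 = {"Questions": 1_000_500, "Com_select": 800_400}

rates = counter_rates(sample_t0, sample_t1, interval_seconds=10)
# rates["Questions"] -> 50.0 statements per second
```

In practice the samples would come from `SHOW GLOBAL STATUS` (or `mysqladmin extended-status`), and counter resets at server restart would need to be handled.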
You will leave this webinar with a solid understanding of the types of monitoring you should be doing, the low-hanging fruit, and tools for doing it. This is not just a sales pitch for VividCortex. Register below, and we will send you a link to the recording and a copy of the slide deck.