Work-Centric Database Management Metrics: Q&A

Posted by Alex Slotnick on Sep 15, 2016 4:04:11 PM

A few weeks ago, we teamed up with our friends at Datadog to host a webinar, “5 Tips on Determining the Most Impactful Metrics in Your App.” One of the best moments from the event was when Datadog’s Matt Williams and VividCortex’s Preetam Jinka set aside some time for a session of Q&A, as they took turns answering both each other’s questions and questions from the audience.

In this blog post, we've transcribed and excerpted some key moments from that conversation. We hope you find Preetam and Matt's insights valuable and interesting. Also remember, if you want to experience the webinar in full, you can always watch a recording here, which includes prepared presentations with slides and extended, organic conversation, like the dialogues below. 

Q: How do your monitoring solutions help users find problems?

Matt: As soon as you install the Datadog agents on each of your hosts and turn on different integrations, we’re going to give you a lot of metrics. You’re going to look at those work metrics to find issues or problems that are going on in your environment. Maybe there’s nothing going on today, but at some point in the future you’ll see some sort of problem. We provide users with a lot of dashboards, too. As soon as you install any of the integrations, you get a dashboard along with it for free, and then when you start working with that dashboard you realize, well, there are these two graphs that I don’t really care about. And there are two other graphs I wish were there — or 10 other graphs I wish were there. So then you go in and tweak it… you use those dashboards to figure out what’s going wrong; you’ll correlate events with metrics, and that will help you figure out the overall problem. What about VividCortex?

Preetam: We also provide dashboards and events, but I think what really sets VividCortex apart is that because we’re looking at individual query workloads, we try to provide that information front and center. We have a tool called the Profiler, which I posted screenshots of [earlier in the webinar]. The Profiler really lets users rank different categories of items, like queries, databases, or database users, by many different dimensions.


So you can view the top ten queries by total execution time or total throughput. Or even the top ten queries by errors, or missing indexes, or slow queries, or poor indexes. That way, if you’re interested in optimizing queries that are missing indexes, we present it very clearly: “Here are the top ten queries that are missing indexes. Those are the ones you should focus on first.”
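To make “rank by total execution time” concrete, here’s a minimal sketch of the idea behind that kind of profiler view (not VividCortex’s actual implementation; the query digests and latencies below are made up): aggregate per-query latency samples by normalized query digest, then sort by total time.

```python
from collections import defaultdict

# Hypothetical samples of (normalized query digest, latency in ms)
samples = [
    ("SELECT * FROM users WHERE id = ?", 2.1),
    ("SELECT * FROM orders WHERE user_id = ?", 40.0),
    ("SELECT * FROM users WHERE id = ?", 1.9),
    ("UPDATE sessions SET last_seen = ? WHERE id = ?", 5.0),
    ("SELECT * FROM orders WHERE user_id = ?", 38.5),
]

totals = defaultdict(float)   # total execution time per digest
counts = defaultdict(int)     # throughput (call count) per digest
for digest, ms in samples:
    totals[digest] += ms
    counts[digest] += 1

# Rank digests by total execution time, Profiler-style
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for digest, total_ms in top[:10]:
    print(f"{total_ms:8.1f} ms  x{counts[digest]}  {digest}")
```

A query that is individually fast but called constantly can still dominate this ranking, which is exactly why total time, not per-call latency, is the useful first sort key.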

Q: As a user, how much should I know about my systems to use each app?

Matt: For Datadog, we definitely want to provide everything that the typical customer needs in order to make a well-informed decision. Going back to that Peter Drucker quote, basically: “What you measure is what you can make better.” I think we’re there to provide a lot of the information, but it’s up to you as a user of Datadog to go in there and use this information in the right way... to understand how the metrics relate to your overall business goals, because, again, the business goals are what are most important.

Preetam: I think we have the same general approach. We show a lot of information that people are probably going to be interested in — we have a bunch of pre-set dashboards — but it’s really up to the user to interpret them, based on what they know of their own databases... But I think if you’re a brand new user who doesn’t know too much about databases, our tool does a great job of picking out relevant information. If you’re new to MySQL, you might not know how to fetch that information. For instance, how would you know if a query is using an index or not? We can provide that information without you having to open up a shell and figure out, “Okay, how do I look at this thing and find something out?” We provide that through a user interface that people can use without knowing too much about MySQL or Postgres or something like that.
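Preetam’s “is this query using an index?” question is answered from the database’s query plan. Here is a small, self-contained illustration using SQLite as a stand-in (in MySQL you would run `EXPLAIN` on the statement instead; the table and index names below are made up):

```python
import sqlite3

# Toy schema: an orders table with a secondary index on customer_id
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")

# Ask the database how it would execute the query
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

# The plan's detail column names the index when one is used
uses_index = any("idx_customer" in row[-1] for row in plan)
print(uses_index)  # True: the query is served by idx_customer
```

A tool like the ones discussed here is essentially running this kind of check for you across your whole workload and surfacing the queries where the answer comes back “no.”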

Matt: I’ve actually used the Datadog interface to help me focus my learning efforts. When I want to learn about a new technology, or a technology we’re using at Datadog, one of the things I’ll do is look at the dashboard, figure out what the 10 or 15 most important metrics are, according to whoever wrote the dashboard, and then I’ll start researching.

Q: How well does each service work with a modern microservices architecture?

Matt: At Datadog, we collect a lot of information, and we don’t really mind where that information is coming from. Really, all that’s necessary is installing a Python-based agent. We can install that agent on a physical box, or on a virtual machine. When it comes to Docker, we can install that agent as a sidecar container. Once we have that, we collect all the metrics from all the other containers that are on the host — you basically install one agent in a container on each of the hosts that are hosting all your containers. Now, if your definition of microservices extends beyond just Docker and containers to serverless architectures, then we have a solution there as well, for collecting data about your application that might be running in, say, AWS Lambda.
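For reference, the one-agent-per-host sidecar pattern Matt describes looks roughly like the sketch below, based on Datadog’s documented containerized-agent setup: the agent container mounts the Docker socket read-only so it can discover and report on the other containers on the host. Treat the image tag, API-key placeholder, and mounts as illustrative, and check Datadog’s current documentation for exact values.

```yaml
# Sketch of a per-host monitoring sidecar (values illustrative)
version: "2"
services:
  datadog-agent:
    image: datadog/agent
    environment:
      - DD_API_KEY=<your-api-key>
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # discover other containers
      - /proc/:/host/proc/:ro                          # host-level system metrics
      - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro         # per-container resource usage
```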

Preetam: I think what’s common to a lot of microservices is that each service might have its own database, or its own flavor of database. I think what we do really well is provide one dashboard where we can monitor lots of different kinds of databases. If you have a MongoDB instance for one microservice and another microservice that’s using MySQL, we’ll show queries from both on a single page. And, of course, you can drill down and separate the two out. But to get an idea of all the queries being run on your entire data layer, I think VividCortex does a pretty good job of abstracting away the database-specific components and letting you view the database work as generic queries. They don’t all have to be SQL — MongoDB is a NoSQL database — but we can treat all of those in a similar way, using this idea of a query as a unit of work.

Matt: Actually I didn’t realize that — when I open up your tool, I’ll be able to see how all my database queries are doing, across all the different databases I’m using, all in one dashboard. All in one view.

Preetam: Right. On a summary page, you can see how many queries are executing, across every type of database.
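The “generic unit of work” idea Preetam describes can be sketched in a few lines: normalize events from different database types into one record shape, then summarize across all of them. The names here are illustrative, not VividCortex’s API:

```python
from dataclasses import dataclass

@dataclass
class QueryEvent:
    """One unit of database work, regardless of database type."""
    db_type: str       # "mysql", "mongodb", "postgres", ...
    digest: str        # normalized query text or command
    latency_ms: float

# Hypothetical events captured from two different databases
events = [
    QueryEvent("mysql", "SELECT * FROM users WHERE id = ?", 2.0),
    QueryEvent("mongodb", "users.find({_id: ?})", 3.5),
]

# A single summary view can now count work across every database type
by_db = {}
for e in events:
    by_db[e.db_type] = by_db.get(e.db_type, 0) + 1
print(by_db)  # {'mysql': 1, 'mongodb': 1}
```

Because every event shares one shape, the same ranking and summary logic works for SQL and NoSQL alike; drilling down is just filtering on `db_type`.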

Q: Do you only monitor what you want optimized or do you monitor everything?

Preetam: I definitely think it’s important to monitor everything. We talked [earlier in the webinar] about learning more from our monitoring systems, and if you just pick out certain things that you want to monitor, then you’re going to miss out on all the things that you don’t know about. So it’s definitely important to monitor everything, and it’s especially important to monitor the things that you’re trying to optimize. Of course there are always more metrics you can add, and sometimes it might just get too expensive to monitor millions of metrics. But you should still try to monitor everything, and then, based on what you’re trying to optimize, add metrics that reflect whatever you want to optimize.

Remember, if you want to check out the full recording of the webinar, you can always access it here. And if you want to see some of the VividCortex features that Preetam has discussed, the easiest and quickest way is to request a free trial, so you can see what our solutions look like when applied to your organization's own systems. 
