Log Management With ELK and Why You Should Care

Editor’s note: This post was originally written for the ASPE blog. You can check out the original here.


Sometimes, small things can cause big problems. You make a move that you think is going to make things easier, and sometimes it does, but it can just as easily make things harder.

That’s the case with microservices. You know the benefits of moving from a monolithic architecture to a microservices architecture. They include more flexibility and faster deployments.

This is great until you remember all the logging you were doing with your monolithic app. All of that log data could be found on one or a few systems that made up the app. But in a microservices architecture, all of your application services are logging data on every instance.

You need a log management tool that can help you collect and store this data from all these services. That way, you can search for and analyze the data you need when you need it.

This is where the Elastic Stack comes in to save your day. In this post, let’s discuss how it does this—and why you should care.

What’s the ELK Stack?

The ELK Stack is the more common name for a suite of components used to monitor and observe your infrastructure. You can use the ELK Stack to manage all of your log data and analyze everything in real time.

The acronym “ELK” stands for the original three open source components that made up the stack: Elasticsearch, Logstash, and Kibana. Logstash ingests and collects all of that log data coming from various sources and sends it over to Elasticsearch. Once it receives the data from Logstash, Elasticsearch indexes it all so you can search and analyze it. Kibana then uses a query language to pull the data out of Elasticsearch for visualization, dashboarding, and more.

The Elastic Stack is the next iteration of the ELK Stack. It incorporates another component, Beats, which can ingest data similarly to Logstash but not as comprehensively. Beats is more lightweight.

I guess they could have called it KELB or BELK, but it’s just called the Elastic Stack. Now, let’s dig into each component a little more.

Join the Elasticsearch Party

You have data from your microservices that you need to search. Just use Elasticsearch. It’s the original component of the four, created to search any data you can get into it.

All data that Elasticsearch receives is indexed and stored for searching and analysis. The data gets stored in Apache Lucene indices as an inverted index, which makes it faster to search. Elasticsearch also runs as a distributed cluster and accepts requests over its REST API.

Whether you’re running Linux, Windows, or macOS, you can use Elasticsearch for your searching adventures. You can install it on server instances running Linux distributions like Debian and Red Hat, as well as on macOS and Windows machines.
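To see what that looks like in practice, here’s a minimal sketch using curl against a local node on port 9200. The app-logs index name and the document fields are just illustrations, not anything your deployment will have by default:

```bash
# Index a sample log document into a hypothetical "app-logs" index
curl -X POST "http://localhost:9200/app-logs/_doc" \
  -H "Content-Type: application/json" \
  -d '{"@timestamp": "2021-06-01T12:00:00Z", "service": "checkout", "level": "ERROR", "message": "payment timeout"}'

# Search that index for error-level entries
curl -X GET "http://localhost:9200/app-logs/_search" \
  -H "Content-Type: application/json" \
  -d '{"query": {"match": {"level": "ERROR"}}}'
```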

Give Me the Logstash

Because it’s first and foremost a search engine, Elasticsearch can store and search many different types of data. But log data has become the most common type, in large part because of Logstash.

With Logstash, you can collect data from multiple sources. Data from all these sources is collected in real time into a pipeline. This data can then be transformed if needed and sent to any number of output sources.

Using a plugin architecture, Logstash comes with over 200 built-in plugins for inputting, filtering, and outputting the data it ingests. Input plugins let it pull in data from Java application logs, web server logs, HTTP requests, and more. Filter plugins let it aggregate, rename, or drop fields from those inputs. And finally, once you’ve transformed your data, output plugins let you ship it to Kafka for buffering, to Elasticsearch (“you know, for search”), or to many other destinations.
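To make that concrete, here’s a minimal pipeline sketch. The file path, grok pattern, and index name are illustrative; a real pipeline would match your own log format:

```conf
# logstash.conf — one input, one filter, one output
input {
  file {
    path => "/var/log/myapp/*.log"
  }
}

filter {
  # Parse lines like "2021-06-01T12:00:00Z ERROR payment timeout"
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "myapp-logs-%{+YYYY.MM.dd}"
  }
}
```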

Drop Some Beats

You can probably guess that with Logstash receiving, transforming, and sending all of this data, performance can become an issue. The community around the ELK Stack developed some lighter-weight alternatives, and one of those became Beats.

Beats is a collection of agents installed on your systems, each collecting a specific set of information. There are currently eight types of Beats that Elastic makes, and the open source community has developed many others. Because each Beat has a single, well-defined job, it can stay lightweight and use far fewer resources than Logstash.

Each beat can collect varying types of data. There’s Filebeat for log files, Functionbeat for serverless cloud infrastructure, Metricbeat for system metrics, and Packetbeat for network packets.
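As a sketch of how lightweight this can be, here’s a minimal filebeat.yml that tails a hypothetical application log directory; the paths and hosts are illustrative:

```yaml
# filebeat.yml — paths and hosts are illustrative
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log

# Ship directly to Elasticsearch...
output.elasticsearch:
  hosts: ["localhost:9200"]

# ...or comment out the block above and send to Logstash for heavier processing:
# output.logstash:
#   hosts: ["localhost:5044"]
```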

Get the Kibana

Because Elasticsearch exposes a REST API, you can send HTTP requests to it directly. Want to see a list of all your indices? You can open your browser and go to http://&lt;Elasticsearch IP&gt;:9200/_cat/indices?v, or use the command line with curl -X GET "http://&lt;Elasticsearch IP&gt;:9200/_cat/indices?v". Either one will display a list of indices in your Elasticsearch deployment.

Kibana lets you do this and more. Using the Kibana Query Language (KQL), you can retrieve this kind of information from Elasticsearch and then graph it. You can put multiple pieces of data from numerous sources, whether they came through Logstash or Beats, into a single dashboard. Then you can share that dashboard with your team or email scheduled reports.
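For example, here are a couple of KQL queries you might type into Kibana’s search bar to narrow a dashboard down. The field names (level, service, http.response.status_code) are hypothetical and depend on how your logs were parsed upstream:

```text
level: "ERROR" and service: "checkout"
http.response.status_code >= 500 and not service: "health-check"
```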

Kibana opens up the possibilities of what you can do with all the data you’ve collected from your microservices instances.

Why Care?

One of the challenges of deploying a new solution is a key barrier to entry: cost. Many monitoring solutions, whether for logs or anything else, come with a price tag. For those of you on small teams or in small organizations, that may not be an option. And even if it were, do you want to spend the money only to find out later that the solution isn’t the right one for you? Probably not.

If cost is an obstacle, you can tier your Elastic Stack deployment and start small. Here are three reasons why you should care about deploying it and using it to monitor and observe your infrastructure.

It’s Interoperable

The ELK Stack is a stack of four components, and you can mix and match them with tools you already have that do the same job. The stack can work as a whole or be broken up, and it can interoperate with other tools to get the job done.

Are you using Fluentd for logging data? Use that in place of Logstash or Beats, if needed. Send your data directly to Elasticsearch and Kibana.
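As a sketch, swapping in Fluentd can be as small as one output section using the fluent-plugin-elasticsearch plugin; the tag and host below are illustrative:

```conf
# fluent.conf — forward events tagged app.* straight to Elasticsearch
<match app.**>
  @type elasticsearch
  host localhost
  port 9200
  # write logstash-YYYY.MM.DD style indices so Kibana index patterns pick them up
  logstash_format true
</match>
```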

Are you already using Grafana for visualizations and dashboarding? Use that in place of Kibana, if needed. Retrieve data from Elasticsearch right in Grafana to create your graphs and charts.

It’s Open Source

The Elastic Stack is a collection of open source projects. If you want to deploy it yourself to test it out before shelling out any money, you can. Obviously, there’s the time you’ll spend getting to know the stack, but you can avoid any upfront cost.

All you have to do is download each of the components you want to implement and install in your infrastructure. You can start monitoring your infrastructure today, without having to wait for a purchase order or any purchase approval.
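For example, on a Debian-based system you can pull everything from Elastic’s own package repository. This is a sketch for the 7.x line; check elastic.co for the current version and repository path:

```bash
# Add Elastic's signing key and APT repository (7.x shown; adjust to the current release)
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

# Install whichever components you want, then start them
sudo apt-get update
sudo apt-get install elasticsearch kibana logstash filebeat
sudo systemctl start elasticsearch kibana
```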

It’s Managed

If you don’t have the time, or don’t want to spend it learning the ELK Stack, you have options for paying someone to manage things for you. Elastic, the company behind the stack, has its own managed service. AWS has its own Elasticsearch service. There are also other providers with hosted Elastic Stack options.

If you have the budget but not the time to maintain your own ELK Stack, you can subscribe to a managed service and reduce your time to value.

Parting Thoughts

If your journey into microservices has you trying to figure out log management all over again, you need to look at a log monitoring solution like the ELK Stack. There are many use cases for ELK, and you can break your deployment up into small parts if needed.

But don’t let these small things turn into big problems. Without a plan, you can easily implement an ELK solution that causes more problems than it solves. Understand your team’s needs and put a plan together for implementation. Whether that plan includes taking some parts of the stack, going open source, or letting someone else manage it, make sure it fits your organization’s specific needs for success.
