Monitoring APIs with ELK

The Basics

One of the main challenges we’ve dealt with over the last couple of years has been opening our platform and recommendation engine to the developer community. With the amount of data that Outbrain processes, direct relationships with hundreds of thousands of sites, and a reach of more than 600M users a month, we can drive the next wave of content innovation. A key part of enabling an automated, large-scale recommendation system is giving application developers the option to interact with our system via API.

Developers build applications, and those applications are used by users in different locations and at different times. When you expose an API for external use, you can rarely predict how people will actually use it.

These variations can stem from several causes:

  1. Unpredictable scenarios
  2. Unintentional misuse of the API, whether due to a lack of proper documentation, a bug, or simply because a developer didn’t RTFM.
  3. Intentional misuse of the API. Yeah, you should expect people to abuse your API or use it for fraudulent activity.

In all of these cases, we need to know how the developer community is using the APIs, how the end users (applications) are using them, and we need to be able to take proactive measures.

Hello ELK.

The Stack


ElasticSearch, Logstash and Kibana (AKA ELK) are great tools for collecting, filtering, processing, indexing and searching through logs. The setup is simple: our service writes logs (using Log4J), the logs are picked up by a Logstash agent that sends them to an ElasticSearch index, and Kibana is set up to visualize the data in the ES index.
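
To make the pipeline concrete, here’s a minimal sketch of the Log4J side, assuming a Log4J 1.x setup where the dedicated API log goes to its own file that the Logstash agent tails. The logger name and file path are ours, for illustration only:

    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import java.io.IOException;

    public class ApiLogSetup {
        public static Logger createApiLogger() throws IOException {
            Logger logger = Logger.getLogger("api-requests");
            // "%m%n" writes the raw message plus a newline; parsing is Logstash's job
            logger.addAppender(new FileAppender(
                new PatternLayout("%m%n"), "/var/log/service/api-requests.log"));
            // Keep API request lines out of the general application log
            logger.setAdditivity(false);
            return logger;
        }
    }

A dedicated appender keeps the request log machine-parseable and separate from the noisy debug output mentioned below.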

The Data

Web server logs are usually too generic, and application debug logs are usually too noisy. In our case, we added a dedicated log with a single line for every API request. Since we’re in application code, we can enrich the log with interesting fields, like the country the request originated from (translated from the IP). A sketch of such a log line follows the field list below.

Here’s a list of useful fields:

  • Request IP – Don’t forget about the XFF (X-Forwarded-For) header
  • Country / City – We use a 3rd-party database for translating IPs to countries.
  • Request User-Agent
  • Request Device Type – Resolved from the User-Agent
  • Request HTTP Method – GET, POST, etc.
  • Request Query Parameters
  • Request URL
  • Response HTTP Status Code – 200, 204, etc.
  • Response Error Message – The API service can fill in extra details on errors.
  • Developer Identifier / API Key – If you can identify the developer, application or user, add these fields.
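
Here’s a minimal sketch of what writing such a single-line record might look like. The key=value layout and field names are our illustration (JSON lines work just as well, and Logstash can parse either, e.g. with its kv filter), not necessarily the exact production format:

    import org.apache.log4j.Logger;

    public class ApiRequestLogger {
        private static final Logger API_LOG = Logger.getLogger("api-requests");

        // Field names are illustrative; use whatever scheme your Logstash filters expect
        public static void logRequest(String requestIp, String country, String userAgent,
                                      String deviceType, String httpMethod, String url,
                                      String queryParams, int httpStatus,
                                      String errorMessage, String apiKey) {
            API_LOG.info(String.format(
                "request_ip=%s country=%s user_agent=\"%s\" device=%s http_method=%s "
                + "url=%s params=\"%s\" http_status=%d error=\"%s\" api_key=%s",
                requestIp, country, userAgent, deviceType, httpMethod, url, queryParams,
                httpStatus, errorMessage == null ? "" : errorMessage, apiKey));
        }
    }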

What can you get out of this?

So we’ve got the data in ES. Now what?

Obvious – Events over time


This is pretty trivial: you want to see how many requests are made. With Kibana’s slice ’n dice capabilities, you can easily break it down per application, country, or any other field that you’ve bothered to add. If an application is abusing your API and calling it a lot, you can see whose request volume just jumped and handle it.
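
Under the hood this is just a date histogram aggregation, so the same breakdown can also be fetched straight from ElasticSearch. A rough sketch, assuming the index name (api-logs-*) and the field names from the logging example above, and assuming those fields are indexed as exact (not analyzed) values:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class RequestsOverTime {
        public static void main(String[] args) throws Exception {
            // Hourly request counts per API key for the last 24 hours.
            // "interval" is the pre-7.x form; newer ES versions use "fixed_interval".
            String query = "{"
                + "\"size\": 0,"
                + "\"query\": {\"range\": {\"@timestamp\": {\"gte\": \"now-24h\"}}},"
                + "\"aggs\": {\"per_app\": {\"terms\": {\"field\": \"api_key\"},"
                + "\"aggs\": {\"over_time\": {\"date_histogram\": "
                + "{\"field\": \"@timestamp\", \"interval\": \"1h\"}}}}}"
                + "}";
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:9200/api-logs-*/_search").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(query.getBytes(StandardCharsets.UTF_8));
            }
            try (InputStream in = conn.getInputStream()) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }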

Request Origin


If you’re able to resolve the request IP (or XFF header IP) to a country, you’ll get a cool-looking map / table and see where requests are coming from. This way you can detect anomalies like fraud.
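
One practical detail worth a sketch: behind a load balancer or proxy, the socket address is not the client’s, so the left-most entry of the X-Forwarded-For header is what you feed into the IP-to-country lookup. This uses the standard Servlet API, nothing Outbrain-specific:

    import javax.servlet.http.HttpServletRequest;

    public class ClientIpResolver {
        // XFF format is "client, proxy1, proxy2"; the left-most entry is the original client
        public static String clientIp(HttpServletRequest request) {
            String xff = request.getHeader("X-Forwarded-For");
            if (xff != null && !xff.isEmpty()) {
                return xff.split(",")[0].trim();
            }
            return request.getRemoteAddr();
        }
    }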

Http Status Breakdown


By itself, this is nice to have. Combined with Kibana’s slice ’n dice capabilities, this lets you see an overview for any breakdown. In many cases you can see that an application or developer is firing the wrong API call. Be proactive and lend some assistance in near real time. Trust us, they’ll be impressed.
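
The ad-hoc equivalent is a terms aggregation on the status field, optionally filtered down to one application. A sketch of the query body, posted to _search exactly like the events-over-time example above (the API key and field names are still our assumption):

    public class StatusBreakdown {
        public static void main(String[] args) {
            // Count of each HTTP status for a single (hypothetical) API key
            String query = "{"
                + "\"size\": 0,"
                + "\"query\": {\"term\": {\"api_key\": \"some-developer-key\"}},"
                + "\"aggs\": {\"by_status\": {\"terms\": {\"field\": \"http_status\"}}}"
                + "}";
            System.out.println(query);  // POST to http://localhost:9200/api-logs-*/_search
        }
    }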

IP Diversity


Why would you care about this? Consider the following: a developer creates an application using your API, but all requests are made from a limited number of IPs. This could be intentional, for example, if all requests are made through some cloud service. It could also hint at a bug in the integration of the API. Now you can investigate.
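
In query terms, IP diversity is a cardinality aggregation nested under a per-developer terms aggregation. A sketch with the same assumed index and field names (the cardinality count is approximate, which is fine for spotting outliers):

    public class IpDiversity {
        public static void main(String[] args) {
            // Distinct source IPs per API key; a suspiciously low count stands out
            String query = "{"
                + "\"size\": 0,"
                + "\"aggs\": {\"per_developer\": {\"terms\": {\"field\": \"api_key\"},"
                + "\"aggs\": {\"distinct_ips\": {\"cardinality\": {\"field\": \"request_ip\"}}}}}"
                + "}";
            System.out.println(query);  // POST to http://localhost:9200/api-logs-*/_search
        }
    }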

Save the Best for Last

The data exists in ElasticSearch, and querying it through Kibana is just one way of using it. Here are a few other awesome ways to use the data.

Automated Validations (or Anomaly detection)

Once we identified key anomalies in API usage, we set up automated tests that search for these anomalies on a daily basis. Automatic anomaly detection in API usage proved to be incredibly useful when scaling the product. These tests can be run on demand or scheduled, and a daily report is produced.
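
Our actual test harness isn’t described here, but as a hedged sketch, even a plain ScheduledExecutorService posting a query once a day gets you surprisingly far. This example checks error volume per API key over the last day (index and field names assumed as before):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class DailyApiAudit {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(DailyApiAudit::runChecks, 0, 24, TimeUnit.HOURS);
        }

        static void runChecks() {
            try {
                // Requests and 4xx/5xx responses per API key over the last day
                String query = "{\"size\": 0,"
                    + "\"query\": {\"range\": {\"@timestamp\": {\"gte\": \"now-1d\"}}},"
                    + "\"aggs\": {\"per_key\": {\"terms\": {\"field\": \"api_key\"},"
                    + "\"aggs\": {\"errors\": {\"filter\": "
                    + "{\"range\": {\"http_status\": {\"gte\": 400}}}}}}}}";
                HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:9200/api-logs-*/_search").openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(query.getBytes(StandardCharsets.UTF_8));
                }
                // A real job would parse the response and flag keys whose error
                // ratio jumped; here we only confirm the query ran.
                System.out.println("Audit query returned HTTP " + conn.getResponseCode());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }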


Abuse Detection

ElasticSearch is (as the name suggests) very elastic. It enables querying and aggregating the data in a variety of ways. Security experts can (relatively) easily slice & dice the data to find abuse patterns. For example, we detect when the same user-id is used in two different locations and trigger an alert.
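
As a sketch of how such a check can be approximated (assuming a user_id field was logged alongside the rest): aggregate per user id, count distinct countries, and alert on buckets where the count exceeds one:

    public class MultiLocationCheck {
        public static void main(String[] args) {
            // Distinct countries seen per user id in the last hour; any bucket with
            // distinct_countries > 1 in the response is a candidate for an alert
            String query = "{"
                + "\"size\": 0,"
                + "\"query\": {\"range\": {\"@timestamp\": {\"gte\": \"now-1h\"}}},"
                + "\"aggs\": {\"per_user\": {\"terms\": {\"field\": \"user_id\"},"
                + "\"aggs\": {\"distinct_countries\": {\"cardinality\": {\"field\": \"country\"}}}}}"
                + "}";
            System.out.println(query);  // POST to http://localhost:9200/api-logs-*/_search
        }
    }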

Key Takeaways

  • Use ELK for analyzing your API usage.
  • Have the application write the events (not a generic web server).
  • Provide application-level information, e.g. additional error information or resolved geo location.
  • Share the love

