Monitoring with Prometheus
Prometheus is a popular systems monitoring and alerting toolkit. Prometheus implements a multi-dimensional time series data model, and metrics data is collected via a pull model over HTTP.
Kong Gateway supports Prometheus with the Prometheus plugin, which exposes Kong Gateway performance and proxied upstream service metrics on the /metrics endpoint.
This guide will help you set up a test Kong Gateway and Prometheus service. Then you will generate sample requests to Kong Gateway and observe the collected monitoring data.
Prerequisites
This guide assumes the following tools are installed locally:
- Docker is used to run Kong Gateway, the supporting database, and Prometheus locally.
- curl is used to send requests to Kong Gateway. curl is pre-installed on most systems.
Configure Prometheus monitoring
- Install Kong Gateway:
This step is optional if you wish to use an existing Kong Gateway installation. When using an existing Kong Gateway, you will need to modify the commands to account for network connectivity and installed Kong Gateway services and routes.
curl -Ls https://get.konghq.com/quickstart | bash -s -- -m
The -m flag instructs the script to install a mock service that is used in this guide to generate sample metrics. Once Kong Gateway is ready, you will see the following message:
Kong Gateway Ready
- Install the Kong Gateway Prometheus plugin:
curl -s -X POST http://localhost:8001/plugins/ \
  --data "name=prometheus"
You should receive a JSON response with the details of the installed plugin.
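If you want to confirm the plugin was registered, one quick check (not part of the original quickstart) is to list the plugins on the Admin API, which is assumed here to still be on the default port 8001 used throughout this guide. The response should include a prometheus entry with "enabled": true.
curl -s http://localhost:8001/plugins/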
- Create a Prometheus configuration file named prometheus.yml in the current directory, and copy the following values:
scrape_configs:
  - job_name: 'kong'
    scrape_interval: 5s
    static_configs:
      - targets: ['kong-quickstart-gateway:8001']
See the Prometheus Configuration Documentation for details on these settings.
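If you would like to validate the configuration file before starting the server, something along the following lines should work. It assumes the official prom/prometheus image ships the promtool utility at /bin/promtool, which recent releases do:
docker run --rm -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint /bin/promtool prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml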
- Run a Prometheus server, and pass it the configuration file created in the previous step. Prometheus will begin to scrape metrics data from Kong Gateway.
docker run -d --name kong-quickstart-prometheus \
  --network=kong-quickstart-net -p 9090:9090 \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus:latest
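To confirm that Prometheus has picked up the Kong Gateway scrape target, you can query its targets API after a few seconds (the 9090 port mapping comes from the command above). The kong job should eventually appear with a health of "up":
curl -s localhost:9090/api/v1/targets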
- Generate sample traffic to the mock service. This allows you to observe metrics generated from the Prometheus plugin. The following command generates 60 requests over one minute. Run the following in a new terminal:
for _ in {1..60}; do curl -s localhost:8000/mock/anything; sleep 1; done
- You can view the metric data directly from Kong Gateway by querying the /metrics endpoint on the Admin API:
curl -s localhost:8001/metrics
Kong Gateway reports system-wide performance metrics by default. When the plugin is installed and traffic is being proxied, it records additional metrics across service, route, and upstream dimensions.
The response will look similar to the following snippet:
# HELP kong_bandwidth Total bandwidth in bytes consumed per service/route in Kong
# TYPE kong_bandwidth counter
kong_bandwidth{service="mock",route="mock",type="egress"} 13579
kong_bandwidth{service="mock",route="mock",type="ingress"} 540
# HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
# TYPE kong_datastore_reachable gauge
kong_datastore_reachable 1
# HELP kong_http_status HTTP status codes per service/route in Kong
# TYPE kong_http_status counter
kong_http_status{service="mock",route="mock",code="200"} 6
# HELP kong_latency Latency added by Kong, total request time and upstream latency for each service/route in Kong
# TYPE kong_latency histogram
kong_latency_bucket{service="mock",route="mock",type="kong",le="1"} 4
kong_latency_bucket{service="mock",route="mock",type="kong",le="2"} 4
...
See the Kong Prometheus Plugin documentation for details on the available metrics and configurations.
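If you are only interested in one metric family, you can filter the scrape output; for example, to show just the kong_http_status counter from the sample above:
curl -s localhost:8001/metrics | grep kong_http_status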
- Prometheus provides multiple ways to query collected metric data.
You can view collected metrics in the Prometheus expression browser by opening http://localhost:9090/graph in a browser.
You can also query Prometheus directly using its HTTP API:
curl -s 'localhost:9090/api/v1/query?query=kong_http_status'
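As a slightly richer, illustrative example, the HTTP API accepts full PromQL expressions. The following query asks for the per-second rate of proxied requests over the last minute, based on the kong_http_status counter shown earlier; adjust the time range as needed:
curl -s -G 'localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(kong_http_status[1m]))'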
Prometheus also provides documentation for setting up Grafana as a visualization tool for the collected time series data.
Cleanup
Once you are done experimenting with Prometheus and Kong Gateway, you can use the following commands to stop and remove the services created in this guide:
docker stop kong-quickstart-prometheus
curl -Ls https://get.konghq.com/quickstart | bash -s -- -d
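The docker stop command leaves the stopped Prometheus container in place, since the run command above did not pass --rm. If you also want to remove it:
docker rm kong-quickstart-prometheus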
More information
- How to monitor with StatsD provides a guide to monitoring Kong Gateway with the StatsD plugin
- See the Tracing API Reference for information on Kong Gateway’s tracing capabilities