Using Redis for rate limiting
Kong can rate limit traffic without any external dependency. Kong stores the request counters in-memory and each Kong node applies the rate limiting policy independently without synchronization of information. However, if Redis is available in your cluster, Kong can take advantage of it and synchronize the rate limit information across multiple Kong nodes and enforce a slightly different rate limiting policy.
Learn to use Redis for rate limiting in a multi-node Kong deployment.
You can use the Kong Gateway Enterprise Secrets Management feature along with the example rate-limiting plugin. If you have an existing plugin that you wish to use Secrets Management with, you can skip directly to the Secrets Management section and use it for your plugin instead of the example rate-limiting plugin.
Before you begin, ensure that you have installed Kong Ingress Controller with Gateway API support in your Kubernetes cluster and can connect to Kong.
Prerequisites
Install the Gateway APIs
- Install the Gateway API CRDs before installing Kong Ingress Controller.

  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml

- Create a Gateway and GatewayClass instance to use.

  echo "
  ---
  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: GatewayClass
  metadata:
    name: kong
    annotations:
      konghq.com/gatewayclass-unmanaged: 'true'
  spec:
    controllerName: konghq.com/kic-gateway-controller
  ---
  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: Gateway
  metadata:
    name: kong
  spec:
    gatewayClassName: kong
    listeners:
    - name: proxy
      port: 80
      protocol: HTTP
  " | kubectl apply -f -

  The results should look like this:

  gatewayclass.gateway.networking.k8s.io/kong created
  gateway.gateway.networking.k8s.io/kong created
Install Kong
You can install Kong in your Kubernetes cluster using Helm.
- Add the Kong Helm charts:

  helm repo add kong https://charts.konghq.com
  helm repo update

- Install Kong Ingress Controller and Kong Gateway with Helm:

  helm install kong kong/ingress -n kong --create-namespace
Test connectivity to Kong
Kubernetes exposes the proxy through a Kubernetes service. Run the following commands to store the load balancer IP address in a variable named PROXY_IP:
- Populate $PROXY_IP for future commands:

  export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo $PROXY_IP

- Ensure that you can call the proxy IP:

  curl -i $PROXY_IP

  The results should look like this:

  HTTP/1.1 404 Not Found
  Content-Type: application/json; charset=utf-8
  Connection: keep-alive
  Content-Length: 48
  X-Kong-Response-Latency: 0
  Server: kong/3.0.0

  {"message":"no Route matched with those values"}
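On some cloud providers the load balancer exposes a DNS hostname instead of an IP address, so the `ip` JSONPath field above comes back empty. A sketch of a fallback that prefers the IP and otherwise uses the hostname (the helper function name is ours, not part of Kong):

```shell
# pick_proxy_addr prefers the LoadBalancer IP and falls back to its DNS hostname.
pick_proxy_addr() {
  ip="$1"; hostname="$2"
  if [ -n "$ip" ]; then echo "$ip"; else echo "$hostname"; fi
}

# Against a live cluster, you would feed it both status fields of the Service:
#   export PROXY_IP=$(pick_proxy_addr \
#     "$(kubectl get svc -n kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')" \
#     "$(kubectl get svc -n kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')")

# Offline demonstration with an empty IP and a hostname:
pick_proxy_addr "" "a1b2c3.elb.example.com"   # prints a1b2c3.elb.example.com
```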
Deploy an echo service
To proxy requests, you need an upstream application to send a request to. Deploying this echo server provides a simple application that returns information about the Pod it’s running in:
kubectl apply -f https://docs.konghq.com/assets/kubernetes-ingress-controller/examples/echo-service.yaml
The results should look like this:
service/echo created
deployment.apps/echo created
Create a configuration group
Ingress and Gateway API controllers need a configuration that indicates which set of routing configuration they should recognize. This allows multiple controllers to coexist in the same cluster. Before creating individual routes, you need to create a class configuration to associate routes with:
Kong Ingress Controller recognizes the kong IngressClass and the konghq.com/kic-gateway-controller GatewayClass by default. Setting the CONTROLLER_INGRESS_CLASS or CONTROLLER_GATEWAY_API_CONTROLLER_NAME environment variable to another value overrides these defaults.
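For example, to register the controller under a different GatewayClass controller name, you could set the variable on the controller container. This is a sketch of a Deployment fragment; the container name and the controller name value are assumptions, not defaults:

```yaml
# Fragment of the Kong Ingress Controller Deployment (hypothetical names):
spec:
  template:
    spec:
      containers:
        - name: ingress-controller
          env:
            - name: CONTROLLER_GATEWAY_API_CONTROLLER_NAME
              # Must match spec.controllerName on your GatewayClass.
              value: example.com/custom-kic-controller
```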
Add routing configuration
Create routing configuration to proxy /echo requests to the echo server:
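The original page selects this manifest from a tabbed widget. A Gateway API version of the route is sketched below; the HTTPRoute name and the backend port 1027 are assumptions based on the echo Service deployed above, so adjust them if your manifest differs. Save it as echo-route.yaml and apply it with kubectl apply -f echo-route.yaml:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo
  annotations:
    # Strip the matched /echo prefix before proxying to the upstream.
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
    - name: kong          # the Gateway created earlier
  hostnames:
    - kong.example        # matched against the Host header used below
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - name: echo
          port: 1027
```

If applied successfully, kubectl reports something similar to httproute.gateway.networking.k8s.io/echo created.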
Test the routing rule:
curl -i -H 'Host:kong.example' $PROXY_IP/echo
The results should look like this:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 140
Connection: keep-alive
Date: Fri, 21 Apr 2023 12:24:55 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/3.2.2
Welcome, you are connected to node docker-desktop.
Running on Pod echo-7f87468b8c-tzzv6.
In namespace default.
With IP address 10.1.0.237.
...
If everything is deployed correctly, you should see the above response. This verifies that Kong Gateway can correctly route traffic to an application running inside Kubernetes.
Set up rate limiting
- Create an instance of the rate-limiting plugin.

  echo "
  apiVersion: configuration.konghq.com/v1
  kind: KongPlugin
  metadata:
    name: rate-limit
    annotations:
      kubernetes.io/ingress.class: kong
  config:
    minute: 5
    policy: local
  plugin: rate-limiting
  " | kubectl apply -f -
The results should look like this:
kongplugin.configuration.konghq.com/rate-limit created
- Associate the plugin with the Service.

  kubectl annotate service echo konghq.com/plugins=rate-limit
The results should look like this:
service/echo annotated
- Send requests through this Service to see the rate limiting response headers.

  curl -si -H 'Host:kong.example' $PROXY_IP/echo | grep RateLimit

  The results should look like this:

  RateLimit-Limit: 5
  X-RateLimit-Remaining-Minute: 4
  X-RateLimit-Limit-Minute: 5
  RateLimit-Reset: 60
  RateLimit-Remaining: 4
- Send repeated requests to decrement the remaining limit headers. Kong blocks requests once the limit of five per minute is reached.

  for i in `seq 6`; do curl -sv -H 'Host:kong.example' $PROXY_IP/echo 2>&1 | grep "< HTTP"; done

  The results should look like this:

  < HTTP/1.1 200 OK
  < HTTP/1.1 200 OK
  < HTTP/1.1 200 OK
  < HTTP/1.1 200 OK
  < HTTP/1.1 200 OK
  < HTTP/1.1 429 Too Many Requests
Scale to multiple pods
- Scale your Deployment to three replicas, to test with multiple proxy instances.

  kubectl scale --replicas 3 -n kong deployment kong-gateway

  The results should look like this:

  deployment.apps/kong-gateway scaled
- Check that all Pods are READY and in the Running state using the command kubectl get pods -n kong.

- Send requests to this Service. Note that the remaining counter no longer decrements reliably.

  for i in `seq 10`; do curl -sv -H 'Host:kong.example' $PROXY_IP/echo 2>&1 | grep "X-RateLimit-Remaining-Minute"; done
The results should look like this:
< X-RateLimit-Remaining-Minute: 4
< X-RateLimit-Remaining-Minute: 4
< X-RateLimit-Remaining-Minute: 3
< X-RateLimit-Remaining-Minute: 4
< X-RateLimit-Remaining-Minute: 3
< X-RateLimit-Remaining-Minute: 2
< X-RateLimit-Remaining-Minute: 3
< X-RateLimit-Remaining-Minute: 2
< X-RateLimit-Remaining-Minute: 1
< X-RateLimit-Remaining-Minute: 1
The policy: local setting in the plugin configuration tracks request counters in each Pod's local memory separately. Counters are not synchronized across Pods, so clients can send requests past the limit without being throttled if they route through different Pods. Using a load balancer that distributes client requests to the same Pod can alleviate this somewhat, but changes to the number of replicas can still disrupt accurate accounting. To consistently enforce the limit, the plugin needs to use a shared set of counters across all Pods. The redis policy can do this when a Redis instance is available.
Deploy Redis to your Kubernetes cluster
Redis provides an external database for Kong components to store shared data, such as rate limiting counters. There are several options to install it:
Bitnami provides a Helm chart for Redis with turnkey options for authentication.
- Create a password Secret. Replace PASSWORD with a password of your choice.

  kubectl create -n kong secret generic redis-password-secret --from-literal=redis-password=PASSWORD

  The results should look like this:

  secret/redis-password-secret created

- Install Redis:

  helm install -n kong redis oci://registry-1.docker.io/bitnamicharts/redis \
    --set auth.existingSecret=redis-password-secret \
    --set architecture=standalone
Helm displays instructions that describe the new installation.
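Before updating the plugin, you may want to wait until Redis is actually ready. This is a sketch; the label selector assumes the Bitnami chart's default labels:

```shell
# Block until the Redis Pod reports Ready (up to two minutes).
kubectl wait --namespace kong --for=condition=ready pod \
  -l app.kubernetes.io/name=redis --timeout=120s
```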
- Update your plugin configuration with the redis policy, Service, and credentials. Replace PASSWORD with the password that you set for Redis.

  kubectl patch kongplugin rate-limit --type json --patch '[
    { "op":"replace", "path":"/config/policy", "value":"redis" },
    { "op":"add", "path":"/config/redis_host", "value":"redis-master" },
    { "op":"add", "path":"/config/redis_password", "value":"PASSWORD" }
  ]'
The results should look like this:
kongplugin.configuration.konghq.com/rate-limit patched
If redis_username is not set, the plugin uses the default redis user.
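If you manage configuration declaratively rather than with kubectl patch, the patched plugin is equivalent to the manifest below. This is a sketch for parity with the patch above; PASSWORD stays a placeholder, and in production you would typically reference a Secret instead of embedding the password in plugin config:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit
  annotations:
    kubernetes.io/ingress.class: kong
config:
  minute: 5
  policy: redis            # shared counters instead of per-Pod memory
  redis_host: redis-master # the Bitnami chart's standalone Service name
  redis_password: PASSWORD
plugin: rate-limiting
```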
Test rate limiting in a multi-node Kong deployment
Send requests to the Service and inspect the rate limiting response headers.
for i in `seq 10`; do curl -sv -H 'Host:kong.example' $PROXY_IP/echo 2>&1 | grep "X-RateLimit-Remaining-Minute"; done
The results should look like this:
< X-RateLimit-Remaining-Minute: 4
< X-RateLimit-Remaining-Minute: 3
< X-RateLimit-Remaining-Minute: 2
< X-RateLimit-Remaining-Minute: 1
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
< X-RateLimit-Remaining-Minute: 0
The counters decrement sequentially regardless of the Kong Gateway replica count.
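To confirm the counters now live in Redis rather than in Pod memory, you can list the plugin's keys directly. This is a sketch: it assumes the Bitnami redis-master Service, the PASSWORD you created earlier, and that the rate-limiting plugin prefixes its Redis keys with ratelimit; inspect the actual keys if yours differ:

```shell
# Run a throwaway Redis client Pod against the in-cluster Redis
# and scan for the rate-limiting plugin's counter keys.
kubectl run -n kong redis-client --rm -it --restart=Never \
  --image docker.io/bitnami/redis -- \
  redis-cli -h redis-master -a PASSWORD --scan --pattern 'ratelimit*'
```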