## Configuration

This plugin is compatible with DB-less mode.
### Compatible protocols

The Confluent Consume plugin is compatible with the following protocols:

`grpc`, `grpcs`, `http`, `https`
### Parameters

Here's a list of all the parameters which can be used in this plugin's configuration:

- `name` or `plugin`

  **string, required.** The name of the plugin, in this case `confluent-consume`.
  - If using the Kong Admin API, Konnect API, declarative configuration, or decK files, the field is `name`.
  - If using the `KongPlugin` object in Kubernetes, the field is `plugin`.
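  As a sketch, the same plugin declared both ways (the Kubernetes `metadata.name` here is illustrative):

  ```yaml
  # Kong Admin API, Konnect API, declarative configuration, or decK: use `name`
  plugins:
    - name: confluent-consume
  ---
  # Kubernetes: use `plugin` in a KongPlugin object
  apiVersion: configuration.konghq.com/v1
  kind: KongPlugin
  metadata:
    name: confluent-consume-example
  plugin: confluent-consume
  ```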
- `instance_name`

  **string.** An optional custom name to identify an instance of the plugin, for example `confluent-consume_my-service`. The instance name shows up in Kong Manager and in Konnect, so it's useful when running the same plugin in multiple contexts, for example, on multiple services. You can also use it to access a specific plugin instance via the Kong Admin API.

  An instance name must be unique within the following context:
  - Within a workspace for Kong Gateway Enterprise
  - Within a control plane or control plane group for Konnect
  - Globally for Kong Gateway (OSS)
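  For example, a declarative configuration sketch running the plugin on two services, distinguished by `instance_name` (the service names are illustrative):

  ```yaml
  plugins:
    - name: confluent-consume
      instance_name: confluent-consume_my-service
      service: my-service
    - name: confluent-consume
      instance_name: confluent-consume_my-other-service
      service: my-other-service
  ```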
- `service.name` or `service.id`

  **string.** The name or ID of the service the plugin targets. Set one of these parameters if adding the plugin to a service through the top-level `/plugins` endpoint. Not required if using `/services/{serviceName|Id}/plugins`.
- `route.name` or `route.id`

  **string.** The name or ID of the route the plugin targets. Set one of these parameters if adding the plugin to a route through the top-level `/plugins` endpoint. Not required if using `/routes/{routeName|Id}/plugins`.
- `consumer.name` or `consumer.id`

  **string.** The name or ID of the consumer the plugin targets. Set one of these parameters if adding the plugin to a consumer through the top-level `/plugins` endpoint. Not required if using `/consumers/{consumerName|Id}/plugins`.
- `enabled`

  **boolean, default: `true`.** Whether this plugin will be applied.
- `config`

  **record, required.**
  - `bootstrap_servers`

    **set of type `record`.** Set of bootstrap brokers in a `{host: host, port: port}` list format.
    - `host`

      **string, required.** A string representing a host name, such as `example.com`.
    - `port`

      **integer, required, between `0` and `65535`.** An integer representing a port number between 0 and 65535, inclusive.
  - `topics`

    **array of type `record`, required, `len_min: 1`.** The Kafka topics and their configuration you want to consume from.
    - `name`

      **string, required.**
  - `mode`

    **string, required, default: `http-get`.** Must be one of: `server-sent-events`, `http-get`. The mode of operation for the plugin.
  - `message_deserializer`

    **string, required, default: `noop`.** Must be one of: `json`, `noop`. The deserializer to use for the consumed messages.
  - `auto_offset_reset`

    **string, required, default: `latest`.** Must be one of: `earliest`, `latest`. The offset to start from when there is no initial offset in the consumer group.
  - `commit_strategy`

    **string, required, default: `auto`.** Must be one of: `auto`, `off`. The strategy to use for committing offsets.
  - `timeout`

    **integer, default: `10000`.** Socket timeout in milliseconds.
  - `keepalive`

    **integer, default: `60000`.** Keepalive timeout in milliseconds.
  - `keepalive_enabled`

    **boolean, default: `false`.**
  - `cluster_api_key`

    **string, required, referenceable, encrypted.** Username/API key for SASL authentication.
  - `cluster_api_secret`

    **string, required, referenceable, encrypted.** Password/API secret for SASL authentication.
  - `confluent_cloud_api_key`

    **string, referenceable, encrypted.** API key for authentication with Confluent Cloud. This allows for management tasks such as creating topics, ACLs, etc.
  - `confluent_cloud_api_secret`

    **string, referenceable, encrypted.** The corresponding secret for the Confluent Cloud API key.
  - `cluster_name`

    **string.** An identifier for the Kafka cluster. By default, this field generates a random string. You can also set your own custom cluster identifier. If more than one Kafka plugin is configured without a `cluster_name` (that is, if the default autogenerated value is removed), these plugins will use the same producer, and by extension, the same cluster. Logs will be sent to the leader of the cluster.
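Putting the parameters above together, a minimal declarative configuration might look like the following sketch. The broker host, topic name, and vault references are placeholders, not values from this document:

```yaml
plugins:
  - name: confluent-consume
    config:
      bootstrap_servers:
        - host: broker.confluent.example.com  # placeholder broker host
          port: 9092
      topics:
        - name: my-topic                      # placeholder topic
      mode: http-get                # or: server-sent-events
      message_deserializer: json    # or: noop (the default)
      auto_offset_reset: latest
      commit_strategy: auto
      cluster_api_key: "{vault://env/confluent-api-key}"        # referenceable: can resolve from a vault
      cluster_api_secret: "{vault://env/confluent-api-secret}"
```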