This guide aims to walk you through the process of working on issues and Pull Requests (PRs).
Bear in mind that you will not be able to complete some steps on your own if you do not have “write” permission. Feel free to reach out to the maintainers to help you unlock these activities.
Please note that we have a code of conduct. Make sure that you follow it in all of your interactions with the project.
There are two options to look for the issues to contribute to. The first is our "Up for grabs" board. There the issues are sorted by the required experience level (beginner, intermediate, expert).
The second option is to search for "good first issue"-labeled issues. Some of them might not be displayed on the aforementioned board or vice versa.
You also need to consider labels. You can sort the issues by scope labels, such as scope/backend, scope/frontend, or even scope/k8s. If any issue covers several specific areas, and you do not have the required expertise for one of them, just do your part of the work - others will do the rest.
There are several criteria that make an issue feasible for development. The implementation of any features and/or their enhancements should be reasonable and must be backed by justified requirements (demanded by the community, roadmap plans, etc.). The final decision is left to the maintainers' discretion.
All bugs should be confirmed as such (i.e. the behavior is unintended).
Any issue should be properly triaged by the maintainers beforehand, which includes:
Having a proper milestone set
Having required labels assigned: "accepted" label, scope labels, etc.
Formally, if these triage conditions are met, you can start to work on the issue.
With all these requirements met, feel free to pick the issue you want. Reach out to the maintainers if you have any questions.
Every issue “in progress” needs to be assigned to a corresponding person. To keep the status of the issue clear to everyone, please keep the card's status updated ("project" card to the right of the issue should match the milestone’s name).
Please refer to this guide.
In order to keep branch names uniform and easy to understand, please use the following conventions for branch naming.
Generally speaking, it is a good idea to add a group/type prefix to a branch; e.g., if you are working on a specific issue, you could name your branch issues/xxx.
Here is a list of good examples:
issues/123
feature/feature_name
bugfix/fix_thing
Java: There is a file called checkstyle.xml in the project root under the etc directory. You can import it into IntelliJ IDEA via the Checkstyle plugin.
REST paths should be written in lowercase and consist of plural nouns only. Also, multiple words that are placed in a single path segment should be divided by a hyphen (-).
Query variable names should be formatted in camelCase.
Model names should consist of plural nouns only and should be formatted in camelCase as well.
When creating a PR please do the following:
In commit messages use these closing keywords.
Link an issue(-s) via "linked issues" block.
Set the PR labels. Ensure that you set only the same set of labels that is present in the issue, and ignore yellow status/ labels.
If the PR does not close any of the issues, the PR itself might need to have a milestone set. Reach out to the maintainers to consult.
Assign the PR to yourself. A PR assignee is someone whose goal is to get the PR merged.
Add reviewers. As a rule, reviewers' suggestions are pretty good; please use them.
Upon merging the PR, please use a meaningful commit message; the task name should be fine in this case.
When composing a build, ensure that any install or build dependencies have been removed before the end of the layer.
Update the README.md with the details of changes made to the interface. This includes new environment variables, exposed ports, useful file locations, and container parameters.
WIP
WIP
Once you have installed the prerequisites and cloned the repository, run the following steps in your project directory:
NOTE: If you are a macOS M1 user, please keep the following in mind:
Make sure you have an ARM-supported Java installed
Skip the Maven tests as they might not be successful
Build a docker image with the app
If you need to build the frontend kafka-ui-react-app, go here: kafka-ui-react-app-build-documentation
In case you want to build kafka-ui-api by skipping the tests:
To build only the kafka-ui-api you can use this command:
If this step is successful, it should create a docker image named provectuslabs/kafka-ui with the latest tag on your local machine (except on macOS M1).
Using Docker Compose
NOTE: If you are a macOS M1 user, you can use the arm64-supported docker compose file ./documentation/compose/kafka-ui-arm64.yaml
Start the kafka-ui app using the docker image built in step 1 along with Kafka clusters:
Using Spring Boot Run
If you want to start only kafka clusters (to run the kafka-ui app via spring-boot:run):
Then start the app.
Running in kubernetes
Using Helm Charts
To read more, please refer to the chart documentation.
To see the kafka-ui app running, navigate to http://localhost:8080.
About Kafka-UI
UI for Apache Kafka is a versatile, fast, and lightweight web UI for managing Apache Kafka® clusters. Built by developers, for developers.
The app is a free, open-source web UI to monitor and manage Apache Kafka clusters.
UI for Apache Kafka is a simple tool that makes your data flows observable, helps find and troubleshoot issues faster and delivers optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters - Brokers, Topics, Partitions, Production, and Consumption.
Configuration wizard — configure your Kafka clusters right in the UI
Multi-Cluster Management — monitor and manage all your clusters in one place
Performance Monitoring with Metrics Dashboard — track key Kafka metrics with a lightweight dashboard
View Kafka Brokers — view topic and partition assignments, controller status
View Kafka Topics — view partition count, replication status, and custom configuration
View Consumer Groups — view per-partition parked offsets, combined and per-partition lag
Browse Messages — browse messages with JSON, plain text, and Avro encoding
Dynamic Topic Configuration — create and configure new topics with dynamic configuration
Configurable Authentication — secure your installation with optional Github/Gitlab/Google OAuth 2.0
Custom serialization/deserialization plugins - use a ready-to-go serde for your data like AWS Glue or Smile, or code your own!
ODD Integration — Explore and monitor kafka related metadata changes in OpenDataDiscovery platform
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Examples of behavior that contributes to a positive environment for our community include:
Demonstrating empathy and kindness toward other people
Being respectful of differing opinions, viewpoints, and experiences
Giving and gracefully accepting constructive feedback
Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
The use of sexualized language or imagery, and sexual attention or advances of any kind
Trolling, insulting or derogatory comments, and personal or political attacks
Public or private harassment
Publishing others' private information, such as a physical or email address, without their explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at email kafkaui@provectus.com. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
To run UI for Apache Kafka, you can use either a pre-built Docker image or build it (or a jar file) yourself.
The liveness and readiness endpoint is at /actuator/health.
The info endpoint (build info) is located at /actuator/info.
Kafka-UI Project Roadmap
The roadmap provides a list of features we decided to prioritize in project development. It should serve as a reference point to understand the project's goals.
We do prioritize them based on the feedback from the community, our own vision, and other conditions and circumstances.
The roadmap sets the general way of development. The roadmap is mostly about long-term features. All the features could be re-prioritized, rescheduled, or canceled.
If there's no feature X, that doesn't mean we're not going to implement it. Feel free to raise the issue for consideration.
If a feature you want to see live is not present on the roadmap, but there's an issue with the feature, feel free to vote for it using reactions to the issue.
Since the roadmap consists mostly of big long-term features, implementing them might not be easy for a beginner outside collaborator.
Role-based access control - to access the UI with granular precision
Data masking - sensitive data in topic messages
This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
Then access the web UI at http://localhost:8080.
The command is sufficient to try things out. When you're done trying things out, you can proceed with a persistent installation.
Please refer to our configuration page to proceed with further app configuration.
All of the environment variables/config properties can be found here.
Please refer to the contributing guide; we'll guide you from there.
The roadmap exists in the form of a GitHub project board and is located here.
A good starting point is checking this article.
This page explains how to get the software you need to use on Linux or macOS for local development.
java 17 package or newer
git installed
docker installed
Note: For contribution, you must have a github account.
Install OpenJDK 17 package or newer:
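For example, on a Debian/Ubuntu-based system (assumed here; use your distribution's package manager otherwise):
sudo apt update
sudo apt install openjdk-17-jdk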
Check java version using the command java -version.
Note: In case OpenJDK 17 is not set as your default Java, run the sudo update-alternatives --config java command to list all installed Java versions. You can set one as the default by entering its selection number in the list and pressing Enter. For example, to set Java 17 as the default, you would enter "3" and press Enter.
Install git:
Install docker:
To execute the docker command without sudo:
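A common way to do this (assuming a standard Docker installation on Linux) is to add your user to the docker group and re-login:
sudo usermod -aG docker $USER
newgrp docker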
Install brew.
Install brew cask:
Install Eclipse Temurin 17 via Homebrew cask:
Verify Installation
Note: In case OpenJDK 17 is not set as your default Java, you can consider including it in your $PATH after installation.
Consider allocating not less than 4GB of memory for your docker. Otherwise, some apps within a stack (e.g. kafka-ui.yaml) might crash.
To check how much memory is allocated to docker, use docker info.
You will find the total memory and used memory in the output. If used memory is not shown, it means memory limits are not set for containers.
To allocate 4GB of memory for Docker:
MacOS
Edit docker daemon settings within docker dashboard
For Ubuntu
Open the Docker configuration file in a text editor using the following command:
Add the following line to the file to allocate 4GB of memory to Docker:
Save the file and exit the text editor.
Restart the Docker service using the following command:
Verify that the memory limit has been set correctly by running the following command:
Note that the warning messages are expected as they relate to the kernel not supporting cgroup memory limits.
Now any containers you run in docker will be limited to this amount of memory. You can also increase the memory limit as per your preference.
In the next section, you'll learn how to Build and Run kafka-ui.
Set your git credentials:
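For example (replace the name and e-mail with your own):
git config --global user.name "Your Name"
git config --global user.email "you@example.com"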
More info on setting git credentials:
Build & Run Without Docker
Once you have installed the prerequisites and cloned the repository, run the following steps in your project directory:
Execute the jar
NOTE: If you want to get kafka-ui up and running locally quickly without building the jar file manually, then just follow Running Without Docker Quickly
Comment out the docker-maven-plugin plugin in the kafka-ui-api pom.xml
Command to build the jar
Once your build is successful, a jar file named kafka-ui-api-0.0.1-SNAPSHOT.jar is generated inside kafka-ui-api/target.
Execute the jar
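A minimal sketch, assuming the jar was built in the previous step and your config file lives at /tmp/kui/config.yml (adjust the paths to your setup):
java -Dspring.config.additional-location=/tmp/kui/config.yml -jar kafka-ui-api/target/kafka-ui-api-0.0.1-SNAPSHOT.jar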
Quick start (demo run)
Ensure you have docker installed
Ensure your kafka cluster is available from the machine you're planning to run the app on
Run the following:
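A typical demo run, using the public image and the dynamic config flag described elsewhere in these docs:
docker run -it -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true provectuslabs/kafka-ui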
When you're done with testing, you can refer to the next articles to persist your config & deploy the app wherever you need to.
Example of how to configure clusters in the configuration file.
Go to `http://localhost:8080` and configure your first cluster by pressing the "Configure new cluster" button.
To install the app via Helm please refer to this page.
How to Deploy Kafka UI from AWS Marketplace
Go to the AWS Marketplace website and sign in to your account.
Click "Continue to Subscribe" and accept the terms and conditions. Click "Continue to Configuration".
Choose your desired software version and region. Click "Continue to Launch".
Choose "Launch from Website" and select your desired EC2 instance type. You can choose a free tier instance or choose a larger instance depending on your needs. We recommend having at least 512 RAM for an instant.
Next, select the VPC and subnet where you want the instance to be launched. If you don't have an existing VPC or subnet, you can create one by clicking "Create New VPC" or "Create New Subnet".
Choose your security group. A security group acts as a virtual firewall that controls traffic to and from your instance. If you don't have an existing security group, you can create a new one based on the seller settings by clicking "Create New Based on Seller Settings".
Give your security group a name and description. The seller settings will automatically populate the inbound and outbound rules for the security group based on best practices. You can review and modify the rules if necessary.
Click "Save" to create your new security group.
Select your key pair or create a new one. A key pair is used to securely connect to your instance via SSH. If you choose to create a new key pair, give it a name and click "Create". Your private key will be downloaded to your computer, so make sure to keep it in a safe place.
Finally, click "Launch" to deploy your instance. AWS will create the instance and install the Kafka UI software for you.
To check the EC2 state please click on "EC2 console".
After the instance is launched, you can check its status on the EC2 dashboard. Once it's running, you can access the Kafka UI by copying the public DNS name or IP address provided by AWS and appending port 8080 to it.
Example: ec2-xx-xxx-x-xx.us-west-2.compute.amazonaws.com:8080
If your broker is deployed in AWS, allow incoming traffic from the Kafka-UI EC2 instance by adding an ingress rule to the security group used by the broker. If your broker is not in AWS, make sure it can handle requests from the Kafka-UI EC2 IP address.
That's it! You've successfully deployed the Kafka UI from AWS Marketplace.
Either use the search bar to find "UI for Apache Kafka" or go to the marketplace listing.
More about permissions:
TODO :)
UI for Apache Kafka is also available as a helm chart. See the underlying articles to learn more about it.
A list of ready-to-go docker compose files for various setup scenarios
Please ensure the target volume (~/kui/config.yml) of your config file does exist.
Create a yml file with the following contents:
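A minimal sketch of such a compose file, assuming your config lives at ~/kui/config.yml as mentioned above and dynamic config is enabled:
services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
    volumes:
      - ~/kui/config.yml:/etc/kafkaui/dynamic_config.yaml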
Run the compose via:
docker-compose -f <your-file>.yml up -d
How to set up resource limits
There are two options:
To set or change resource limits for pods you need to create the file values.yaml and add the following lines:
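A sketch of such a values.yaml, assuming the chart exposes the standard Kubernetes resources block (adjust the numbers to your environment):
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi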
Specify the values.yaml file during chart install.
To set limits via CLI you need to specify limits with helm install command.
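For example (the release and chart names follow the examples below; the resources keys assume the standard resources block):
helm install helm-release-name charts/kafka-ui --set resources.limits.cpu=500m --set resources.limits.memory=512Mi --set resources.requests.memory=256Mi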
By default, kafka-ui does not allow changing its configuration at runtime. When the application is started, it reads the configuration from system env, config files (e.g. application.yaml), and JVM arguments (set by -D). Once the configuration is read, it is treated as immutable and won't be refreshed even if the config source (e.g. a file) is changed.
Since version 0.6, we added the ability to change cluster configs at runtime. This option is disabled by default and should be explicitly enabled. To enable it, set the DYNAMIC_CONFIG_ENABLED env property to true or add the dynamic.config.enabled: true property to your yaml config file.
Sample docker compose configuration:
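A minimal sketch (image and port mapping as in the examples above):
services:
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'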
You can even omit all vars other than DYNAMIC_CONFIG_ENABLED to start the application with empty configs and set it up after startup.
When the dynamic config feature is enabled you will see additional buttons that will take you to "Wizard" for editing existing cluster configuration or adding new clusters:
Kafka-ui is a stateless application by its nature, so when you edit the configuration at runtime, it stores configuration additions on the container's filesystem (in the dynamic_config.yaml file). The dynamic config file will be overridden on each configuration submission.
During the configuration process, you can also upload configuration-related files (like truststores and keystores). They will be stored in the etc/kafkaui/uploads folder with a unique timestamp suffix to prevent name collisions. In the wizard, you can also use files that were mounted to the container's filesystem, without uploading them directly.
Note that if the container is recreated, your edited (and uploaded) files won't be present and the app will start with the static configuration only. If you want to keep the configuration created by the wizard, you have to mount/copy the same files into newly created kafka-ui containers (the whole /etc/kafkaui/ folder, by default).
Properties specifying where dynamic config files will be persisted:
DYNAMIC_CONFIG_PATH (dynamic.config.path) - path to the dynamic config file. Default: /etc/kafkaui/dynamic_config.yaml
CONFIG_RELATED_UPLOADS_DIR (config.related.uploads.dir) - path where uploaded files will be placed. Default: /etc/kafkaui/uploads
Currently, the new configuration submission leads to a full application restart. So, if your kafka-ui app is starting slow (not a usual case, but may happen when you have a slow connection to kafka clusters) you may notice UI inaccessibility during restart time.
This page explains configuration file structure
There are two possible ways to configure the app: a YAML config file and environment variables. They can be used interchangeably or even supplement each other. We strongly recommend using YAML in favor of env variables for the most part of the config; you can use env vars to override the default config in different environments.
This tool can help you to translate your config back and forth from YAML to env vars.
We will mostly provide examples of configs in YAML format, but sometimes single properties might be written in the form of env variables.
Rather than writing your config from scratch, it is more convenient to use one of the ready-to-go compose examples and adjust it to your needs.
Docker: docker run -it -p 8080:8080 -e spring.config.additional-location=/tmp/config.yml -v /tmp/kui/config.yml:/tmp/config.yml provectuslabs/kafka-ui
Docker compose:
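A sketch of the compose equivalent of the docker command above (same env variable and mount paths as in the Docker example):
services:
  kafka-ui:
    image: provectuslabs/kafka-ui
    ports:
      - 8080:8080
    environment:
      spring.config.additional-location: /tmp/config.yml
    volumes:
      - /tmp/kui/config.yml:/tmp/config.yml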
Jar: java -Dspring.config.additional-location=<path-to-application-local.yml> -jar <path-to-jar>.jar
name: cluster name
bootstrapServers: where to connect
schemaRegistry: schemaRegistry's address
schemaRegistryAuth.username: schemaRegistry's basic authentication username
schemaRegistryAuth.password: schemaRegistry's basic authentication password
schemaNameTemplate: how keys are saved to Schema Registry
metrics.port: open the JMX port of a broker
metrics.type: type of metrics, either JMX or PROMETHEUS. Defaulted to JMX.
readOnly: enable read-only mode
Configure as many clusters as you need by adding their configs below, separated with -.
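A minimal sketch putting the properties above together (cluster names and addresses are placeholders):
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
      schemaRegistry: http://localhost:8085
      schemaRegistryAuth:
        username: username
        password: password
      metrics:
        port: 9997
        type: JMX
      readOnly: false
    - name: secondLocal
      bootstrapServers: localhost:9094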
This guide has been written for MSK Serverless but is applicable for MSK in general as well.
Go to the MSK page
Click "create cluster"
Choose "Custom create"
Choose "Serverless"
Choose VPC and subnets
Choose the default security group or use the existing one
Go to IAM policies
Click "create policy"
Click "JSON"
Paste the following policy example in the editor, and replace "MSK ARN" with the ARN of your MSK cluster
Go to IAM
Click "Create role"
Choose AWS Services and EC2
On the next page find the policy which has been created in the previous step
Go to EC2
Choose your EC2 with Kafka-UI
Go to Actions -> Security -> Modify IAM role
Choose the IAM role from previous step
Click Update IAM role
Quick Start with Helm Chart
Clone/Copy Chart to your working directory
Execute command
Create values.yml file
Install by executing command
helm install helm-release-name charts/kafka-ui -f values.yml
Create config map
This ConfigMap will be mounted to the Pod
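A sketch of such a ConfigMap, using the name and key referenced in the helm command below (the cluster config under config.yml is a placeholder):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-ui-configmap
data:
  config.yml: |-
    kafka:
      clusters:
        - name: yaml
          bootstrapServers: kafka-cluster-broker-endpoints:9092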
Install by executing the command
helm install helm-release-name charts/kafka-ui --set yamlApplicationConfigConfigMap.name="kafka-ui-configmap",yamlApplicationConfigConfigMap.keyName="config.yml"
Create config map
Install by executing the command
helm install helm-release-name charts/kafka-ui --set existingConfigMap="kafka-ui-helm-values"
Most of the Helm chart parameters are common; the following table describes the unique parameters related to application configuration.
To install Kafka-UI and connect to the web application, execute the following:
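A sketch of both steps, assuming the chart is installed from the local charts directory as in the examples above; the service name and port used for port-forwarding depend on your release name and the chart defaults:
helm install helm-release-name charts/kafka-ui
kubectl port-forward svc/helm-release-name-kafka-ui 8080:80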
Open http://127.0.0.1:8080 in the browser to access Kafka-UI.
To implement SSL for kafka-ui you need to provide JKS files into the pod. Here is the instruction on how to do that.
To create the ConfigMap, use the following command.
If you have specified a namespace, use the same command with -n <namespace>.
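For example (the ConfigMap name and JKS file names are placeholders; adjust them to your files):
kubectl create configmap ssl-files --from-file=truststore.jks --from-file=keystore.jks
kubectl create configmap ssl-files --from-file=truststore.jks --from-file=keystore.jks -n kafka-ui-namespace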
Encode the secret with base64 (you can use this tool: https://www.base64encode.org/). Create a secret.yaml file with the following content:
If you have specified a namespace for the configmap and secret, please use this command:
existingConfigMap - Name of the existing ConfigMap with Kafka-UI environment variables. Default: nil
existingSecret - Name of the existing Secret with Kafka-UI environment variables. Default: nil
envs.secret - Set of the sensitive environment variables to pass to Kafka-UI. Default: {}
envs.config - Set of the environment variables to pass to Kafka-UI. Default: {}
yamlApplicationConfigConfigMap - Map with name and keyName keys; name refers to the existing ConfigMap, keyName refers to the ConfigMap key with the Kafka-UI config in Yaml format. Default: {}
yamlApplicationConfig - Kafka-UI config in Yaml format. Default: {}
networkPolicy.enabled - Enable network policies. Default: false
networkPolicy.egressRules.customRules - Custom network egress policy rules. Default: []
networkPolicy.ingressRules.customRules - Custom network ingress policy rules. Default: []
podLabels - Extra labels for the Kafka-UI pod. Default: {}
route.enabled - Enable OpenShift route to expose the Kafka-UI service. Default: false
route.annotations - Add annotations to the OpenShift route. Default: {}
route.tls.enabled - Enable OpenShift route as a secured endpoint. Default: false
route.tls.termination - Set OpenShift Route TLS termination. Default: edge
route.tls.insecureEdgeTerminationPolicy - Set OpenShift Route Insecure Edge Termination Policy. Default: Redirect
If you're running more than one pod and have authentication enabled, you will encounter issues with sessions, since we store them in cookies and other instances are not aware of your sessions.
The solution for this is to use sticky sessions/session affinity.
An example:
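A sketch of ingress annotations enabling cookie-based session affinity, assuming the NGINX ingress controller (annotation values are illustrative):
annotations:
  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "kafka-ui-session"
  nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"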
Kafka in kraft (zk-less) mode with multiple brokers
Configuration properties for all the things
A reminder: any of these properties can be converted into yaml config properties. For example, KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS becomes:
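A sketch of the resulting yaml (the broker address is a placeholder):
kafka:
  clusters:
    - bootstrapServers: kafka:9092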
Basic username+password authentication
In order to enable basic username+password authentication, add these properties:
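A sketch in env-var form (the property names assume kafka-ui's standard form-login setup; the username and password values are placeholders):
AUTH_TYPE: "LOGIN_FORM"
SPRING_SECURITY_USER_NAME: admin
SPRING_SECURITY_USER_PASSWORD: pass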
Please note that basic auth is not compatible with any other auth method, nor with RBAC.
SERVER_SERVLET_CONTEXT_PATH - URI basePath
LOGGING_LEVEL_ROOT - Setting log level (trace, debug, info, warn, error). Default: info
LOGGING_LEVEL_COM_PROVECTUS - Setting log level (trace, debug, info, warn, error). Default: debug
SERVER_PORT - Port for the embedded server. Default: 8080
KAFKA_ADMIN-CLIENT-TIMEOUT - Kafka API timeout in ms. Default: 30000
KAFKA_CLUSTERS_0_NAME - Cluster name
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS - Address where to connect
KAFKA_CLUSTERS_0_KSQLDBSERVER - KSQL DB server address
KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_USERNAME - KSQL DB server's basic authentication username
KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_PASSWORD - KSQL DB server's basic authentication password
KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTORELOCATION - Path to the JKS keystore to communicate to KSQL DB
KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTOREPASSWORD - Password of the JKS keystore for KSQL DB
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL - Security protocol to connect to the brokers. For SSL connection use "SSL", for plaintext connection don't set this environment variable
KAFKA_CLUSTERS_0_SCHEMAREGISTRY - SchemaRegistry's address
KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME - SchemaRegistry's basic authentication username
KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD - SchemaRegistry's basic authentication password
KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTORELOCATION - Path to the JKS keystore to communicate to SchemaRegistry
KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTOREPASSWORD - Password of the JKS keystore for SchemaRegistry
KAFKA_CLUSTERS_0_METRICS_SSL - Enable SSL for Metrics (for PROMETHEUS metrics type). Default: false.
KAFKA_CLUSTERS_0_METRICS_USERNAME - Username for Metrics authentication
KAFKA_CLUSTERS_0_METRICS_PASSWORD - Password for Metrics authentication
KAFKA_CLUSTERS_0_METRICS_KEYSTORELOCATION - Path to the JKS keystore to communicate to the metrics source (JMX/PROMETHEUS). For advanced setup, see kafka-ui-jmx-secured.yml
KAFKA_CLUSTERS_0_METRICS_KEYSTOREPASSWORD - Password of the JKS metrics keystore
KAFKA_CLUSTERS_0_SCHEMANAMETEMPLATE - How keys are saved to schemaRegistry
KAFKA_CLUSTERS_0_METRICS_PORT - Open metrics port of a broker
KAFKA_CLUSTERS_0_METRICS_TYPE - Type of metrics retriever to use. Valid values are JMX (default) or PROMETHEUS. If Prometheus, then metrics are read from prometheus-jmx-exporter instead of jmx
KAFKA_CLUSTERS_0_READONLY - Enable read-only mode. Default: false
KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME - Given name for the Kafka Connect cluster
KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS - Address of the Kafka Connect service endpoint
KAFKA_CLUSTERS_0_KAFKACONNECT_0_USERNAME - Kafka Connect cluster's basic authentication username
KAFKA_CLUSTERS_0_KAFKACONNECT_0_PASSWORD - Kafka Connect cluster's basic authentication password
KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTORELOCATION - Path to the JKS keystore to communicate to Kafka Connect
KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTOREPASSWORD - Password of the JKS keystore for Kafka Connect
KAFKA_CLUSTERS_0_POLLING_THROTTLE_RATE - Max traffic rate (bytes/sec) that kafka-ui is allowed to reach when polling messages from the cluster. Default: 0 (not limited)
KAFKA_CLUSTERS_0_SSL_TRUSTSTORELOCATION - Path to the JKS truststore to communicate to Kafka Connect, SchemaRegistry, KSQL, Metrics
KAFKA_CLUSTERS_0_SSL_TRUSTSTOREPASSWORD - Password of the JKS truststore for Kafka Connect, SchemaRegistry, KSQL, Metrics
TOPIC_RECREATE_DELAY_SECONDS - Time delay between topic deletion and topic creation attempts for topic recreate functionality. Default: 1
TOPIC_RECREATE_MAXRETRIES - Number of attempts of topic creation after topic deletion for topic recreate functionality. Default: 15
DYNAMIC_CONFIG_ENABLED - Allow changing application config in runtime. Default: false.
kafka_internalTopicPrefix - Set a prefix for internal topics. Defaults to "_".
server.reactive.session.timeout - Session timeout. If a duration suffix is not specified, seconds will be used.
The app supports TLS (SSL) and SASL connections for encryption and authentication. See this docker-compose file reference for SSL-enabled Kafka.
SSO additionally requires TLS to be configured for the application. In this example we will use a self-signed certificate; in case you use proper CA-signed certificates, please skip step 1.
At this step we will generate a self-signed PKCS12 keypair.
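For example, with the JDK's keytool (alias, keystore file name, and validity are placeholders; you will be prompted for a password):
keytool -genkeypair -alias ui-for-apache-kafka -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore kafka-ui.p12 -validity 3650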
After that, you need to provide callback URLs; in our case we will use https://127.0.0.1:8080/login/oauth2/code/auth0
These are the main parameters required for enabling SSO:
To launch UI for Apache Kafka with TLS and SSO enabled, run the following:
In the case of a trusted CA-signed SSL certificate and SSL termination somewhere outside of the application, you can pass only the SSO-related environment variables:
For Azure AD (Office365) OAUTH2 you'll want to add additional environment variables:
Note that the scope is created by default when the application registration is done in the Azure portal. You'll need to update the application registration manifest to include "accessTokenAcceptedVersion": 2
UI for Apache Kafka comes with a built-in library.
More details could be found here:
More about permissions:
Create a new application in your SSO provider of choice.
If you're using a load balancer/proxy and use HTTP between the proxy and the app, you might want to set server.forward-headers-strategy to native as well (SERVER_FORWARDHEADERSSTRATEGY=native); for more info, refer to the relevant Spring Boot documentation.
How to configure SASL SCRAM Authentication
Login module control flag not specified in JAAS config
If you are running against Confluent Cloud and you have correctly specified the JAAS config and still continue getting these errors, check whether you are passing the confluent license in the connector; the absence of a license returns a number of bogus errors like "Login module control flag not specified in JAAS config".
https://docs.confluent.io/platform/current/connect/license.html
A good resource for what properties are needed is here: https://gist.github.com/rmoff/49526672990f1b4f7935b62609f6f567
Check the required permissions.
https://github.com/provectus/kafka-ui/discussions/1104#discussioncomment-1656843 https://github.com/provectus/kafka-ui/discussions/1104#discussioncomment-2963449 https://github.com/provectus/kafka-ui/issues/2184#issuecomment-1198506124
Thanks to ahlooli#2666 on discord:
Create a secret in the AWS Secrets Manager store that contains a key:value pair with 1 username and 1 password. There are certain rules to be followed, like the name of the secret (e.g. it needs to start with AmazonMSK_), so refer back to the AWS documentation.
Proceed to the MSK console and create the MSK cluster; my cluster was the "provisioned" type. Then choose SASL/SCRAM. For other options, also follow the documentation for your preferred configuration.
After the Cluster has been created, you can then proceed to associate the Secrets created earlier to MSK cluster. (Done in MSK Console)
Then we can proceed to create a custom security group that allows port 9096 (or whichever port the MSK broker is using). Rough idea:
Source: EKS cluster security group
Type: TCP
Port: 9096
Find all of the MSK broker ENIs. Proceed to attach the above security group to each ENI. (If you have 3 brokers, which means you have 3 ENIs, you need to do it manually one by one.)
At this stage, the AWS side should have sufficient permission to allow KAFKA-UI to communicate with it.
Increase the webclient.max-in-memory-buffer-size property value. The default value is 20MB.
Add the following property server.forward-headers-strategy=FRAMEWORK
You can configure kafka-ui to mask sensitive data shown in Messages page.
Several masking policies are supported:
For json objects - remove target fields, otherwise - return "null" string.
Apply examples:
For json objects - replace the target fields' values with the specified replacement string (by default with ***DATA_MASKED***). Note: if a target field's value is an object, then the replacement is applied to all its fields recursively (see example).
Apply examples:
Mask the target fields' values with the specified masking characters, recursively (spaces and line separators will be kept as-is). The maskingCharsReplacement array specifies which symbols will be used to replace upper-case chars (index 0), lower-case chars (index 1), digits (index 2) and other symbols (index 3) correspondingly.
Apply examples:
For each policy, if fields is not specified, the policy will be applied to all of the object's fields, or to the whole string if it is not a json object.
You can specify which masks will be applied to a topic's keys/values. Multiple policies will be applied if the topic matches several policies' patterns.
Yaml configuration example:
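A sketch combining the policies above (field names, topic patterns, the replacement string, and the cluster name are placeholders):
kafka:
  clusters:
    - name: ClusterName
      masking:
        - type: REMOVE
          fields: [ "id" ]
          topicValuesPattern: "events-with-ids-.*"
        - type: REPLACE
          fields: [ "companyName" ]
          replacement: "***MASKED***"
          topicValuesPattern: "org-events-.*"
        - type: MASK
          fields: [ "name", "surname" ]
          maskingCharsReplacement: [ "A", "a", "N", "_" ]
          topicValuesPattern: "user-states"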
Same configuration in env-vars fashion:
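Converted to env vars following the naming rule shown earlier (only the first policy is shown for brevity):
KAFKA_CLUSTERS_0_MASKING_0_TYPE: REMOVE
KAFKA_CLUSTERS_0_MASKING_0_FIELDS_0: "id"
KAFKA_CLUSTERS_0_MASKING_0_TOPICVALUESPATTERN: "events-with-ids-.*"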
Serialization, deserialization and custom plugins
Kafka-ui supports multiple ways to serialize/deserialize data.
Big-endian 4/8 bytes representation of signed/unsigned integers.
Base64 (RFC4648) binary data representation. Can be useful in case the actual data is not important, but exactly the same (byte-wise) key/value should be sent.
Hexadecimal binary data representation. Bytes delimiter and case can be configured.
Class name: com.provectus.kafka.ui.serdes.builtin.HexSerde
Treats binary data as a string in specified encoding. Default encoding is UTF-8.
Class name: com.provectus.kafka.ui.serdes.builtin.StringSerde
Sample configuration (if you want to overwrite default configuration):
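A sketch of overriding the default String serde encoding (the serde and cluster names are placeholders):
kafka:
  clusters:
    - name: Cluster1
      serde:
        - name: AsciiString
          className: com.provectus.kafka.ui.serdes.builtin.StringSerde
          properties:
            encoding: "ASCII"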
Class name: com.provectus.kafka.ui.serdes.builtin.ProtobufFileSerde
Sample configuration:
Deserialize-only serde. Decodes protobuf payload without a predefined schema (like the protoc --decode_raw command).
SchemaRegistry serde is automatically configured if schema registry properties are set at the cluster level. But you can add new SchemaRegistry-typed serdes that will connect to another schema-registry instance.
Class name: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
Sample configuration:
You can specify a preferred serde for topic keys/values. This serde will be chosen by default in the UI on the topic's view/produce pages. To do so, set the topicKeysPattern/topicValuesPattern properties for the selected serde. Kafka-ui will choose the first serde that matches the specified pattern.
Sample configuration:
You can specify which serde will be chosen in the UI by default if no other serdes are selected via the topicKeysPattern/topicValuesPattern settings.
Sample configuration:
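A sketch, assuming the defaultKeySerde/defaultValueSerde cluster properties (the serde names refer to built-in serdes):
kafka:
  clusters:
    - name: Cluster1
      defaultKeySerde: Int32
      defaultValueSerde: String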
If the selected serde couldn't be applied (an exception was thrown), then the fallback serde (String serde with UTF-8 encoding) will be applied. Such messages will be specially highlighted in the UI.
You can implement your own serde and register it in kafka-ui application. To do so:
Add the kafka-ui-serde-api dependency (it should be downloadable via maven central)
Implement the com.provectus.kafka.ui.serde.api.Serde interface. See the javadoc for implementation requirements.
Pack your serde into an uber jar, or provide a directory with a no-dependency jar and its dependency jars
Example pluggable serdes: kafka-smile-serde, kafka-glue-sr-serde
Sample configuration:
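A sketch of registering a custom serde (the serde name, class name, and jar path are placeholders):
kafka:
  clusters:
    - name: Cluster1
      serde:
        - name: MyCustomSerde
          className: my.lovely.org.KafkaUiSerde
          filePath: /var/lib/kui-serde/my-kui-serde.jar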
The list of supported auth mechanisms for RBAC
Any OAuth provider which is not in the list: Google, GitHub, Cognito.
Set up the auth itself first, docs here and here
Don't forget "custom-params.type: oauth".
Set up google auth first
Set up github auth first
Set up cognito auth first
Set up LDAP auth first
Not yet supported, see Issue 3741
You can map Okta groups to roles. First, confirm that your Okta administrator has included the group claim, or the groups will not be passed in the auth token.
Ensure roles-field in the auth config is set to groups and that groups is included in the scope; see here for more details.
Configure the role mapping to the okta group via generic provider mentioned above:
Examples of setups for different OAuth providers
In general, the structure of the Oauth2 config looks as follows:
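A sketch of that structure (the provider name, client credentials, and URIs are placeholders; exact property names may vary per provider):
auth:
  type: OAUTH2
  oauth2:
    client:
      myProvider:
        clientId: xxx
        clientSecret: yyy
        scope: openid
        authorization-grant-type: authorization_code
        issuer-uri: https://my-provider.example.com/
        user-name-attribute: preferred_username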
For specific providers like Github (non-enterprise) and Google (see the current list), you don't have to specify URIs as they're well known.
Furthermore, other providers that support OIDC Service Discovery allow fetching the URI configuration from a /.well-known/openid-configuration endpoint. Depending on your setup, you may only have to set the issuer-uri of your provider to enable OIDC Service Discovery.
Example of callback URL for github OAuth app settings:
https://www.kafka-ui.provectus.io/login/oauth2/code/github
For a self-hosted installation, find the properties a little bit below. Replace HOSTNAME with your self-hosted platform's FQDN.
Kafka-UI allows you to log all operations to your kafka clusters done within kafka-ui itself.
Logging can be done to a kafka topic and/or the console.
See all the available configuration properties:
In this article, we'll guide how to set up Kafka-UI with role-based access control.
First of all, you have to decide whether:
You wish to store all roles in a separate config file
Or within the main config file
This is how you include one more file to start with a docker-compose example:
Alternatively, you can append the roles file contents to your main config file.
In the roles file we define roles, duh. Each role has access to defined clusters:
A role also has a list of subjects, which are the entities we will use to assign roles to. They are provider-dependent; in general, they can be users, groups, or some other entities (github orgs, google domains, LDAP queries, etc.). In this example we define a role memelords that will contain all the users within the Google domain memelord.lol and, additionally, a GitHub user Haarolean. You can combine as many subjects as you want within a role (see the sketch below).
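A sketch of the role described above (the cluster name is a placeholder; the subjects follow the providers listed below, and the permissions section is explained further down):
rbac:
  roles:
    - name: "memelords"
      clusters:
        - local
      subjects:
        - provider: oauth_google
          type: domain
          value: "memelord.lol"
        - provider: oauth_github
          type: user
          value: "Haarolean"
      permissions:
        - resource: topic
          value: ".*"
          actions: [ VIEW ]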
A list of supported providers and corresponding subject fetch mechanism:
oauth_google: user
, domain
oauth_github: user
, organization
oauth_cognito: user
, group
ldap: group
ldap_ad: (unsupported yet, will do in 0.8 release)
Find the more detailed examples in a full example file lower.
The next thing which is present in your roles file is, surprisingly, permissions. They consist of:
Resource: can be one of CLUSTERCONFIG, TOPIC, CONSUMER, SCHEMA, CONNECT, KSQL, ACL.
Value: either a fixed string or a regular expression identifying a resource. Value is not applicable to clusterconfig and ksql resources; please do not fill it out for them.
Actions It's a list of actions (the possible values depend on the resource, see the lists below) that will be applied to the certain permission. Also, note, there's a special action for any of the resources called "all", it will virtually grant all the actions within the corresponding resource. An example for enabling viewing and creating topics whose name start with "derp":
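A sketch of that permission entry:
permissions:
  - resource: topic
    value: "derp.*"
    actions: [ VIEW, CREATE ]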
Actions
A list of all the actions for the corresponding resources (please note neither resource nor action names are case-sensitive):
applicationconfig: view, edit
clusterconfig: view, edit
topic: view, create, edit, delete, messages_read, messages_produce, messages_delete
consumer: view, delete, reset_offsets
schema: view, create, delete, edit, modify_global_compatibility
connect: view, edit, create, restart
ksql: execute
acl: view, edit
A complete file example:
A read-only setup:
An admin-group setup example:
ODD Platform allows you to monitor and navigate kafka data streams and see how they embed into your data platform.
This integration allows you to use kafka-ui as an ODD "Collector" for kafka clusters.
Currently, kafka-ui exports:
kafka topics as ODD Datasets with topic's metadata, configs, and schemas
kafka-connect's connectors as ODD Transformers, including input & output topics and additional connector configs
Configuration properties:
First of all, you'd need to set up authentication method(s). Refer to the OAuth2 setup article.
Kafka-ui has an integration with the OpenDataDiscovery Platform (ODD).
INTEGRATION_ODD_URL (integration.odd.url) - ODD platform instance URL. Required.
INTEGRATION_ODD_TOKEN (integration.odd.token) - Collector's token generated in ODD. Required.
INTEGRATION_ODD_TOPICSREGEX (integration.odd.topicsRegex) - RegEx for topic names that should be exported to ODD. Optional; all topics are exported by default.
RBAC (Role based access control)
See this example.
See this example.
Planned, see #478
Variables bound to groovy context: partition, timestampMs, keyAsText, valueAsText, header, key (json if possible), value (json if possible).
JSON parsing logic:
If the key and value can be parsed to JSON, they are bound as JSON objects; otherwise they are bound as nulls.
Sample filters:
keyAsText != null && keyAsText ~"([Gg])roovy" - regex for key as a string
value.name == "iS.ListItemax" && value.age > 30 - in case value is json
value == null && valueAsText != null - search for values that are not nulls and are not json
headers.sentBy == "some system" && headers["sentAt"] == "2020-01-01"
multiline filters are also allowed:
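For example, an illustrative filter split over multiple lines (the field name is a placeholder):
def name = value == null ? null : value.name
name != null && name.toLowerCase().contains("item")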
Yes, you can. Swagger declaration is located here.