UI for Apache Kafka

About

About Kafka-UI

UI for Apache Kafka is a versatile, fast, and lightweight web UI for managing Apache Kafka® clusters. Built by developers, for developers.

The app is a free, open-source web UI to monitor and manage Apache Kafka clusters.

UI for Apache Kafka is a simple tool that makes your data flows observable, helps you find and troubleshoot issues faster, and delivers optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters - Brokers, Topics, Partitions, Production, and Consumption.

Features

  • Configuration wizard — configure your Kafka clusters right in the UI

  • Multi-Cluster Management — monitor and manage all your clusters in one place

  • Performance Monitoring with Metrics Dashboard — track key Kafka metrics with a lightweight dashboard

  • View Kafka Brokers — view topic and partition assignments, controller status

  • View Kafka Topics — view partition count, replication status, and custom configuration

  • View Consumer Groups — view per-partition parked offsets, combined and per-partition lag

  • Browse Messages — browse messages with JSON, plain text, and Avro encoding

  • Dynamic Topic Configuration — create and configure new topics with dynamic configuration

  • Configurable Authentication — secure your installation with optional GitHub/GitLab/Google OAuth 2.0

  • Custom serialization/deserialization plugins - use a ready-to-go serde for your data like AWS Glue or Smile, or code your own!

  • Role-based access control - manage permissions to access the UI with granular precision

  • Data masking - obfuscate sensitive data in topic messages

  • ODD Integration — explore and monitor kafka-related metadata changes in the OpenDataDiscovery platform

K8s / Helm

To install the app via Helm please refer to this page.

Helm charts

UI for Apache Kafka is also available as a helm chart. See the underlying articles to learn more about it.

Roadmap

Kafka-UI Project Roadmap

The roadmap exists in the form of a GitHub project board.

How to use this document

The roadmap provides a list of features we decided to prioritize in project development. It should serve as a reference point to understand the project's goals.

We prioritize them based on feedback from the community, our own vision, and other conditions and circumstances.

The roadmap sets the general way of development. The roadmap is mostly about long-term features. All the features could be re-prioritized, rescheduled, or canceled.

If there's no feature X, that doesn't mean we're not going to implement it. Feel free to raise an issue for consideration. If a feature you want to see live is not present on the roadmap, but there's an issue for it, feel free to vote for it using reactions on the issue.

How to contribute

Since the roadmap consists mostly of big long-term features, implementing them might not be easy for a beginner outside collaborator.

A good starting point is checking the contributing article.

Setting up git

Set your git credentials:

git config --global user.name "Mona Lisa"
git config --global user.email "[email protected]"

More info on setting git credentials: Setting your username in Git, Setting your commit email address.

Configuration file

This page explains configuration file structure

Let's start with the fact that there are two possible ways to configure the app; they can be used interchangeably or even complement each other.

The two ways are a YAML config and an env variables config. We strongly recommend using YAML over env variables for most of the config. You can use env vars to override the default config in different environments.

This tool can help you translate your config back and forth between YAML and env vars.

We will mostly provide config examples in YAML format, but sometimes single properties might be written in the form of env variables.

Rather than writing your config from scratch, it is more convenient to take one of the ready-to-go compose examples and adjust it to your needs.

Providing a config path for the app instance:

Docker: docker run -it -p 8080:8080 -e spring.config.additional-location=/tmp/config.yml -v /tmp/kui/config.yml:/tmp/config.yml provectuslabs/kafka-ui

Docker compose:

services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      # other properties, omitted
      SPRING_CONFIG_ADDITIONAL-LOCATION: /config.yml
    volumes:
      - /tmp/config.yml:/config.yml

Jar: java -Dspring.config.additional-location=<path-to-application-local.yml> -jar <path-to-jar>.jar

Basic config structure

kafka:
  clusters:
    -
      name: local
      bootstrapServers: localhost:29091
      schemaRegistry: http://localhost:8085
      schemaRegistryAuth:
        username: username
        password: password
#     schemaNameTemplate: "%s-value"
      metrics:
        port: 9997
        type: JMX
  • name: cluster name

  • bootstrapServers: where to connect

  • schemaRegistry: schemaRegistry's address

  • schemaRegistryAuth.username: schemaRegistry's basic authentication username

  • schemaRegistryAuth.password: schemaRegistry's basic authentication password

  • schemaNameTemplate: how keys are saved to Schema Registry

  • metrics.port: open the JMX port of a broker

  • metrics.type: type of metrics, either JMX or PROMETHEUS. Defaults to JMX.

  • readOnly: enable read-only mode

Configure as many clusters as you need by adding their configs below separated with -.
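For example, a minimal sketch of a two-cluster config using only the properties listed above (cluster names and addresses are placeholders):

kafka:
  clusters:
    - name: dev
      bootstrapServers: dev-kafka-1:9092,dev-kafka-2:9092
    - name: prod
      bootstrapServers: prod-kafka-1:9092
      readOnly: true # prod is browsable but not editable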

Demo run

Quick start (demo run)

  1. Ensure you have docker installed

  2. Ensure your kafka cluster is available from the machine you're planning to run the app on

  3. Run the following:

docker run -it -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true provectuslabs/kafka-ui
  4. Go to `http://localhost:8080/ui/clusters/create-new-cluster` and configure your first cluster by pressing the "Configure new cluster" button.

When you're done with testing, you can refer to the next articles to persist your config & deploy the app wherever you need to.

Persistent start

Please ensure the host file for your config volume (~/kui/config.yml) exists.

Create a yml file with the following contents:

services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: true
    volumes:
      - ~/kui/config.yml:/etc/kafkaui/dynamic_config.yaml

Run the compose via:

docker-compose -f <your-file>.yml up -d
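A minimal sketch of what ~/kui/config.yml could contain, following the structure from the Configuration file page (cluster name and address are placeholders):

kafka:
  clusters:
    - name: local
      bootstrapServers: kafka:9092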

Kafka Permissions

Standalone Kafka ACLs

ACLs required to run the app

ACLs for standalone kafka

This list is enough to run the app in r/o mode

 Permission |    Operation     | ResourceType | ResourceName  | PatternType
------------+------------------+--------------+---------------+--------------
 ALLOW      | READ             | TOPIC        | *             | LITERAL
 ALLOW      | DESCRIBE_CONFIGS | TOPIC        | *             | LITERAL
 ALLOW      | DESCRIBE         | GROUP        | *             | LITERAL
 ALLOW      | DESCRIBE         | CLUSTER      | kafka-cluster | LITERAL
 ALLOW      | DESCRIBE_CONFIGS | CLUSTER      | kafka-cluster | LITERAL

Complex configuration examples

Building

LDAP / Active Directory

auth:
  type: LDAP
spring:
  ldap:
    urls: ldap://localhost:10389
    base: "cn={0},ou=people,dc=planetexpress,dc=com"
    admin-user: "cn=admin,dc=planetexpress,dc=com"
    admin-password: "GoodNewsEveryone"
    user-filter-search-base: "dc=planetexpress,dc=com"
    user-filter-search-filter: "(&(uid={0})(objectClass=inetOrgPerson))"
    group-filter-search-base: "ou=people,dc=planetexpress,dc=com" # required for RBAC
oauth2:
  ldap:
    activeDirectory: false # set to true when connecting to Active Directory
    activeDirectory:       # and specify the AD domain
      domain: memelord.lol

WIP: Testing

TODO :)


Without Docker

Build & Run Without Docker

Once you installed the prerequisites and cloned the repository, run the following steps in your project directory:

Running Without Docker Quickly

  • Download the latest kafka-ui jar file.

  • Execute the jar:

java -Dspring.config.additional-location=<path-to-application-local.yml> --add-opens java.rmi/javax.rmi.ssl=ALL-UNNAMED -jar <path-to-kafka-ui-jar>

  • Example of how to configure clusters in the configuration file (application-local.yml).

Building And Running Without Docker

NOTE: If you want to get kafka-ui up and running locally quickly without building the jar file manually, then just follow Running Without Docker Quickly

  • Comment out the docker-maven-plugin plugin in kafka-ui-api pom.xml.

  • Command to build the jar: run the ./mvnw clean install -Pprod build described in the With Docker section below; with the docker plugin commented out it produces only the jar.

Once your build is successful, the jar file named kafka-ui-api-0.0.1-SNAPSHOT.jar will be generated inside kafka-ui-api/target.

  • Execute the jar (use the same java command as in Running Without Docker Quickly above).

Compose examples

A list of ready-to-go docker compose files for various setup scenarios is available at https://github.com/provectus/kafka-ui/blob/master/documentation/compose/DOCKER_COMPOSE.md

Sticky sessions

If you're running more than one pod and have authentication enabled, you will encounter issues with sessions, as we store them in cookies and other instances are not aware of your sessions.

The solution for this would be using sticky session/session affinity.

An example of the required ingress annotations:

    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: balanced
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-name: kafka-ui

Kafka w/ SSL

Connecting to a Secure Broker

The app supports TLS (SSL) and SASL connections for encryption and authentication.

Running From Docker-compose file

See this docker-compose file reference for ssl-enabled kafka.

MSK (+Serverless) Setup

This guide has been written for MSK Serverless but is applicable for MSK in general as well.

Authentication options for Kafka-UI:

KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL
KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=AWS_MSK_IAM
KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='software.amazon.msk.auth.iam.IAMLoginModule required;'
KAFKA_CLUSTERS_0_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS='software.amazon.msk.auth.iam.IAMClientCallbackHandler'

Creating an instance

  1. Go to the MSK page

  2. Click "create cluster"

  3. Choose "Custom create"

  4. Choose "Serverless"

  5. Choose VPC and subnets

  6. Choose the default security group or use the existing one

Creating a policy

  1. Go to IAM policies

  2. Click "create policy"

  3. Click "JSON"

  4. Paste the following policy example in the editor, and replace "MSK ARN" with the ARN of your MSK cluster

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:DescribeCluster",
                "kafka-cluster:AlterCluster",
                "kafka-cluster:Connect"
            ],
            "Resource": "arn:aws:kafka:eu-central-1:297478128798:cluster/test-wizard/7b39802a-21ac-48fe-b6e8-a7baf2ae2533-s2"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:DeleteGroup",
                "kafka-cluster:DescribeCluster",
                "kafka-cluster:ReadData",
                "kafka-cluster:DescribeTopicDynamicConfiguration",
                "kafka-cluster:AlterTopicDynamicConfiguration",
                "kafka-cluster:AlterGroup",
                "kafka-cluster:AlterClusterDynamicConfiguration",
                "kafka-cluster:AlterTopic",
                "kafka-cluster:CreateTopic",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:AlterCluster",
                "kafka-cluster:DescribeGroup",
                "kafka-cluster:DescribeClusterDynamicConfiguration",
                "kafka-cluster:Connect",
                "kafka-cluster:DeleteTopic",
                "kafka-cluster:WriteData"
            ],
            "Resource": "arn:aws:kafka:eu-central-1:297478128798:topic/test-wizard/7b39802a-21ac-48fe-b6e8-a7baf2ae2533-s2/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:AlterGroup",
                "kafka-cluster:DescribeGroup"
            ],
            "Resource": "arn:aws:kafka:eu-central-1:297478128798:group/test-wizard/7b39802a-21ac-48fe-b6e8-a7baf2ae2533-s2/*"
        }
    ]
}

Attaching the policy to a user

Creating a role for EC2

  1. Go to IAM

  2. Click "Create role"

  3. Choose AWS Services and EC2

  4. On the next page find the policy which has been created in the previous step

Attaching the role to the EC2 instance

  1. Go to EC2

  2. Choose your EC2 with Kafka-UI

  3. Go to Actions -> Security -> Modify IAM role

  4. Choose the IAM role from previous step

  5. Click Update IAM role

AWS Marketplace

How to Deploy Kafka UI from AWS Marketplace

Step 1: Go to AWS Marketplace

Go to the AWS Marketplace website and sign in to your account.

Step 2: Find UI for Apache Kafka

Either use the search bar to find "UI for Apache Kafka" or go to the marketplace product page.

Step 3: Subscribe and Configure

Click "Continue to Subscribe" and accept the terms and conditions. Click "Continue to Configuration".

Step 4: Choose the Software Version and Region

Choose your desired software version and region. Click "Continue to Launch".

Step 5: Launch the Instance

Choose "Launch from Website" and select your desired EC2 instance type. You can choose a free tier instance or choose a larger instance depending on your needs. We recommend having at least 512 RAM for an instant.

Next, select the VPC and subnet where you want the instance to be launched. If you don't have an existing VPC or subnet, you can create one by clicking "Create New VPC" or "Create New Subnet".

Choose your security group. A security group acts as a virtual firewall that controls traffic to and from your instance. If you don't have an existing security group, you can create a new one based on the seller settings by clicking "Create New Based on Seller Settings".

Give your security group a name and description. The seller settings will automatically populate the inbound and outbound rules for the security group based on best practices. You can review and modify the rules if necessary.

Click "Save" to create your new security group.

Select your key pair or create a new one. A key pair is used to securely connect to your instance via SSH. If you choose to create a new key pair, give it a name and click "Create". Your private key will be downloaded to your computer, so make sure to keep it in a safe place.

Finally, click "Launch" to deploy your instance. AWS will create the instance and install the Kafka UI software for you.

Step 6: Check EC2 Status

To check the EC2 state please click on "EC2 console".

Step 7: Access the Kafka UI

After the instance is launched, you can check its status on the EC2 dashboard. Once it's running, you can access the Kafka UI by copying the public DNS name or IP address provided by AWS and appending port 8080 to it. Example: ec2-xx-xxx-x-xx.us-west-2.compute.amazonaws.com:8080

Step 8: Configure Kafka UI to Communicate with Brokers

If your broker is deployed in AWS, allow incoming traffic from the Kafka-UI EC2 instance by adding an ingress rule to the security group used by the broker. If your broker is not in AWS, make sure it can handle requests from the Kafka-UI EC2 IP address.

More about permissions: MSK (+Serverless) Setup

That's it! You've successfully deployed the Kafka UI from AWS Marketplace.

With Docker

Build & Run

Once you installed the prerequisites and cloned the repository, run the following steps in your project directory:

Step 1 : Build

NOTE: If you are a macOS M1 user, please keep in mind the following:

Make sure you have ARM supported java installed

Skip the maven tests as they might not be successful

  • Build a docker image with the app

./mvnw clean install -Pprod
  • if you need to build the frontend kafka-ui-react-app, go here

    • kafka-ui-react-app-build-documentation

  • In case you want to build kafka-ui-api by skipping the tests

./mvnw clean install -Dmaven.test.skip=true -Pprod
  • To build only the kafka-ui-api you can use this command:

./mvnw -f kafka-ui-api/pom.xml clean install -Pprod -DskipUIBuild=true

If this step is successful, it should create a docker image named provectuslabs/kafka-ui with the latest tag on your local machine (except on macOS M1).

Step 2 : Run

Using Docker Compose

NOTE: If you are a macOS M1 user, you can use the arm64-supported docker compose file ./documentation/compose/kafka-ui-arm64.yaml

  • Start the kafka-ui app using docker image built in step 1 along with Kafka clusters:

docker-compose -f ./documentation/compose/kafka-ui.yaml up -d

Using Spring Boot Run

  • If you want to start only kafka clusters (to run the kafka-ui app via spring-boot:run):

docker-compose -f ./documentation/compose/kafka-clusters-only.yaml up -d
  • Then start the app.

./mvnw spring-boot:run -Pprod

# or

./mvnw spring-boot:run -Pprod -Dspring.config.location=file:///path/to/conf.yaml

Running in kubernetes

  • Using Helm Charts

helm repo add kafka-ui https://provectus.github.io/kafka-ui
helm install kafka-ui kafka-ui/kafka-ui

To read more, please refer to the chart documentation.

Step 3 : Access Kafka-UI

  • To see the kafka-ui app running, navigate to http://localhost:8080.

Quick start

Quick Start with Helm Chart

General

  1. Clone/Copy Chart to your working directory

  2. Execute command

    helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts
    helm install kafka-ui kafka-ui/kafka-ui

Passing Kafka-UI configuration as Dict

Create values.yml file

yamlApplicationConfig:
  kafka:
    clusters:
      - name: yaml
        bootstrapServers:  kafka-cluster-broker-endpoints:9092
  auth:
    type: disabled
  management:
    health:
      ldap:
        enabled: false

Install by executing command

helm install helm-release-name charts/kafka-ui -f values.yml

Passing configuration file as ConfigMap

Create config map

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-ui-configmap
data:
  config.yml: |-
    kafka:
      clusters:
        - name: yaml
          bootstrapServers: kafka-cluster-broker-endpoints:9092
    auth:
      type: disabled
    management:
      health:
        ldap:
          enabled: false

This ConfigMap will be mounted to the Pod

Install by executing the command

helm install helm-release-name charts/kafka-ui --set yamlApplicationConfigConfigMap.name="kafka-ui-configmap",yamlApplicationConfigConfigMap.keyName="config.yml"

Passing environment variables as ConfigMap

Create config map

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-ui-helm-values
data:
  KAFKA_CLUSTERS_0_NAME: "kafka-cluster-name"
  KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "kafka-cluster-broker-endpoints:9092"
  AUTH_TYPE: "DISABLED"
  MANAGEMENT_HEALTH_LDAP_ENABLED: "FALSE" 

Install by executing the command

helm install helm-release-name charts/kafka-ui --set existingConfigMap="kafka-ui-helm-values"

SSL example

Implement ssl for kafka-ui

To implement SSL for kafka-ui you need to provide JKS files into the pod. Here are the instructions on how to do that.

Create config map with content from kafka.truststore.jks and kafka.keystore.jks.

To create the configmap, use the following command:

kubectl create configmap ssl-files --from-file=kafka.truststore.jks --from-file=kafka.keystore.jks

If you have specified a namespace, use:

kubectl create configmap ssl-files --from-file=kafka.truststore.jks --from-file=kafka.keystore.jks -n {namespace}

Create secret.

Encode the secrets with base64 (you can use this tool: https://www.base64encode.org/). Create a secret.yaml file with the following content:

apiVersion: v1
kind: Secret
metadata:
 name: ssl-secret
 # Specify namespace if needed, uncomment next line and provide namespace
 #namespace: {namespace}
type: Opaque
data:
 KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: ##Base 64 encoded secret
 KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: ##Base 64 encoded secret

Create ssl-values.yaml file with the following content.

existingSecret: "ssl-files"


env:
- name:  KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION 
  value:  /ssl/kafka.truststore.jks
- name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION
  value: /ssl/kafka.keystore.jks


volumeMounts:
 - name: config-volume
   mountPath: /ssl

volumes:
 - name: config-volume
   configMap:
     name: ssl-files

Install the chart with the command:

helm install kafka-ui kafka-ui/kafka-ui -f ssl-values.yaml

If you have specified namespace for configmap and secret please use this command

helm install kafka-ui kafka-ui/kafka-ui -f ssl-values.yaml -n {namespace}


Resource limits

How to set up resource limits

There are two options:

Set limits via changing values.yaml

To set or change resource limits for pods you need to create the file values.yaml and add the following lines:

resources:
   limits:
     cpu: 200m
     memory: 512Mi
   requests:
     cpu: 200m
     memory: 256Mi

Specify values.yaml file during chart install

helm install kafka-ui kafka-ui/kafka-ui -f values.yaml

Set limits via CLI

To set limits via CLI you need to specify limits with helm install command.

helm install kafka-ui kafka-ui/kafka-ui --set resources.limits.cpu=200m --set resources.limits.memory=512Mi --set resources.requests.memory=256Mi --set resources.requests.cpu=200m 

OpenDataDiscovery Integration

Kafka-ui has integration with the OpenDataDiscovery platform (ODD).

ODD Platform allows you to monitor and navigate kafka data streams and see how they embed into your data platform.

This integration allows you to use kafka-ui as an ODD "Collector" for kafka clusters.

Currently, kafka-ui exports:

  • kafka topics as ODD Datasets with topic's metadata, configs, and schemas

  • kafka-connect's connectors as ODD Transformers, including input & output topics and additional connector configs

Configuration properties:

 Env variable name           | Yaml property               | Description
-----------------------------+-----------------------------+---------------------------------------------------------------------------------------------------
 INTEGRATION_ODD_URL         | integration.odd.url         | ODD platform instance URL. Required.
 INTEGRATION_ODD_TOKEN       | integration.odd.token       | Collector's token generated in ODD. Required.
 INTEGRATION_ODD_TOPICSREGEX | integration.odd.topicsRegex | RegEx for topic names that should be exported to ODD. Optional, all topics are exported by default.
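Putting the YAML properties together, a minimal sketch could look like this (the URL, token, and regex values are placeholders):

integration:
  odd:
    url: http://odd-platform:8080
    token: <collector-token-generated-in-odd>
    topicsRegex: "orders-.*"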

Code of Conduct

Contributor Covenant Code of Conduct

Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

Our Standards

Examples of behavior that contributes to a positive environment for our community include:

  • Demonstrating empathy and kindness toward other people

  • Being respectful of differing opinions, viewpoints, and experiences

  • Giving and gracefully accepting constructive feedback

  • Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience

  • Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

  • The use of sexualized language or imagery, and sexual attention or advances of any kind

  • Trolling, insulting or derogatory comments, and personal or political attacks

  • Public or private harassment

  • Publishing others' private information, such as a physical or email address, without their explicit permission

  • Other conduct which could reasonably be considered inappropriate in a professional setting

Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at email [email protected]. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

1. Correction

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

2. Warning

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

3. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

4. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

FAQ

Basic (username password) authentication

Role-based access control

OAuth 2

LDAP

See example.

Active Directory (LDAP)

See example.

SAML

Planned, see #478

Smart filters syntax

Variables bound to the groovy context: partition, timestampMs, keyAsText, valueAsText, headers, key (json if possible), value (json if possible).

JSON parsing logic:

If the key and value can be parsed as JSON, they are bound as JSON objects; otherwise they are bound as nulls.

Sample filters:

  1. keyAsText != null && keyAsText ~"([Gg])roovy" - regex for key as a string

  2. value.name == "iS.ListItemax" && value.age > 30 - in case value is json

  3. value == null && valueAsText != null - search for values that are not nulls and are not json

  4. headers.sentBy == "some system" && headers["sentAt"] == "2020-01-01"

  5. multiline filters are also allowed:

def name = value.name
def age = value.age
name == "iliax" && age == 30

Can I use the app as API?

Yes, you can. The Swagger declaration is located here.

Getting started

To run UI for Apache Kafka, you can use either a pre-built Docker image or build it (or a jar file) yourself.

Quick start (Demo run)

docker run -it -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true provectuslabs/kafka-ui

Then access the web UI at http://localhost:8080

The command is sufficient to try things out. When you're done trying things out, you can proceed with a persistent installation.

Persistent installation

See the Persistent start section above for a ready-to-go compose file. Please refer to our configuration page to proceed with further app configuration.

Some useful configuration-related links:

  • Web UI Cluster Configuration Wizard

  • Configuration file explanation

  • Docker Compose examples

  • Misc configuration properties

Helm charts

  • Quick start

Building from sources

  • Quick start with building

Liveliness and readiness probes

Liveliness and readiness endpoint is at /actuator/health. Info endpoint (build info) is located at /actuator/info.

Configuration options

All of the environment variables/config properties can be found here.

Contributing

Please refer to the contributing guide, we'll guide you from there.

AWS IAM

How to configure AWS IAM Authentication

UI for Apache Kafka comes with a built-in aws-msk-iam-auth library.

You can pass SASL configs in the properties section for each cluster.

More details can be found in the aws-msk-iam-auth repository.

More about permissions: MSK (+Serverless) Setup

Examples:

Please replace

  • <KAFKA_URL> with broker list

  • <PROFILE_NAME> with your AWS profile

Running From Docker Image

docker run -p 8080:8080 \
    -e KAFKA_CLUSTERS_0_NAME=local \
    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=AWS_MSK_IAM \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS=software.amazon.msk.auth.iam.IAMClientCallbackHandler \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>";' \
    -d provectuslabs/kafka-ui:latest

Configuring by application.yaml

kafka:
  clusters:
    - name: local
      bootstrapServers: <KAFKA_URL>
      properties:
        security.protocol: SASL_SSL
        sasl.mechanism: AWS_MSK_IAM
        sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
        sasl.jaas.config: software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="<PROFILE_NAME>";

Prerequisites

This page explains how to get the software you need to use on Linux or macOS for local development.

  • java 17 package or newer

  • git installed

  • docker installed

Note: For contribution, you must have a github account.

For Linux

  1. Install OpenJDK 17 package or newer:

sudo apt update
sudo apt install openjdk-17-jdk
  • Check java version using the command java -version.

openjdk version "17.0.5" 2022-10-18
OpenJDK Runtime Environment (build 17.0.5+8-Ubuntu-2ubuntu120.04)
OpenJDK 64-Bit Server VM (build 17.0.5+8-Ubuntu-2ubuntu120.04, mixed mode, sharing)

Note: In case OpenJDK 17 is not set as your default Java, run sudo update-alternatives --config java command to list all installed Java versions.

Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111      auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111      manual mode
  2            /usr/lib/jvm/java-16-openjdk-amd64/bin/java      1051      manual mode
  3            /usr/lib/jvm/java-17-openjdk-amd64/bin/java      1001      manual mode

Press <enter> to keep the current choice[*], or type selection number:

You can set it as the default by entering the selection number for it in the list and pressing Enter. For example, to set Java 17 as the default, you would enter "3" and press Enter.

  2. Install git:

sudo apt install git
  3. Install docker:

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-cache policy docker-ce
sudo apt -y install docker-ce

To execute the docker command without sudo:

sudo usermod -aG docker ${USER}
su - ${USER}
sudo chmod 666 /var/run/docker.sock

For macOS

  1. Install brew.

  2. Install brew cask:

brew cask
  3. Install Eclipse Temurin 17 via Homebrew cask:

brew tap homebrew/cask-versions
brew install temurin17
  4. Verify Installation

java -version

Note: In case OpenJDK 17 is not set as your default Java, you can consider including it in your $PATH after installation

export PATH="$(/usr/libexec/java_home -v 17)/bin:$PATH"
export JAVA_HOME="$(/usr/libexec/java_home -v 17)"

Tips

Consider allocating not less than 4GB of memory for your docker. Otherwise, some apps within a stack (e.g. kafka-ui.yaml) might crash.

To check how much memory is allocated to docker, use docker info.

You will find the total memory and used memory in the output. If you don't see used memory, it means memory limits are not set for containers.

To allocate 4GB of memory for Docker:

MacOS

Edit docker daemon settings within docker dashboard

For Ubuntu

  1. Open the Docker configuration file in a text editor using the following command:

sudo nano /etc/default/docker
  2. Add the following line to the file to allocate 4GB of memory to Docker:

DOCKER_OPTS="--default-ulimit memlock=-1:-1 --memory=4g --memory-swap=-1"
  3. Save the file and exit the text editor.

  4. Restart the Docker service using the following command:

sudo service docker restart
  5. Verify that the memory limit has been set correctly by running the following command:

docker info | grep -i memory

Note that the warning messages are expected as they relate to the kernel not supporting cgroup memory limits.

Now any containers you run in docker will be limited to this amount of memory. You can also increase the memory limit as per your preference.

Where to go next

In the next section, you'll learn how to Build and Run kafka-ui.

Supported Identity Providers

The list of supported auth mechanisms for RBAC

Generic OAuth

Any OAuth provider which is not in the list: Google, GitHub, Cognito.

Set up the auth itself first; docs are here and here.

Don't forget "custom-params.type: oauth".

      subjects:
        - provider: oauth
          type: role
          value: "role-name"

Google

Set up google auth first

        - provider: oauth_google
          type: domain
          value: "memelord.lol"
        - provider: oauth_google
          type: user
          value: "[email protected]"

Github

Set up github auth first

        - provider: oauth_github
          type: organization
          value: "provectus"
        - provider: oauth_github
          type: user
          value: "memelord"

Cognito

Set up cognito auth first

        - provider: oauth_cognito
          type: user
          value: "zoidberg"
        - provider: oauth_cognito
          type: group
          value: "memelords"

LDAP

Set up LDAP auth first

        - provider: ldap
          type: group
          value: "admin_staff"

Active Directory

Not yet supported, see Issue 3741

        - provider: ldap_ad # NOT YET SUPPORTED, SEE ISSUE 3741
          type: group
          value: "admin_staff"

Okta

You can map Okta Groups to roles. First, confirm that your okta administrator has included the group claim or the groups will not be passed in the auth token.

Ensure roles-field in the auth config is set to groups and that groups is included in the scope, see here for more details.

Configure the role mapping to the okta group via generic provider mentioned above:

      subjects:
        - provider: oauth
          type: role
          value: "<okta-group-name>"

Audit log

Kafka-UI allows you to log all operations performed on your kafka clusters from within kafka-ui itself.

Logging can be done to a kafka topic and/or to the console.

See all the available configuration properties:

kafka:
  clusters:
    - name: local
      audit:
        topic-audit-enabled: true
        console-audit-enabled: true
        topic: '__kui-audit-log' # default name
        audit-topic-properties: # any kafka topic properties in format of a map
          - retention.ms: 43200000
        audit-topics-partitions: 1 # how many partitions, default is 1
        level: all # either ALL or ALTER_ONLY (default). ALL will log all read operations.
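Following the property-to-env-var convention used elsewhere in this document (Spring relaxed binding), the same settings would presumably look like this as environment variables:

KAFKA_CLUSTERS_0_AUDIT_TOPICAUDITENABLED: 'true'       # assumed mapping of topic-audit-enabled
KAFKA_CLUSTERS_0_AUDIT_CONSOLEAUDITENABLED: 'true'     # assumed mapping of console-audit-enabled
KAFKA_CLUSTERS_0_AUDIT_TOPIC: '__kui-audit-log'
KAFKA_CLUSTERS_0_AUDIT_AUDITTOPICSPARTITIONS: 1
KAFKA_CLUSTERS_0_AUDIT_LEVEL: all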

Configuration wizard

Dynamic application configuration

By default, kafka-ui does not allow changing its configuration at runtime. When the application is started, it reads the configuration from system env, config files (e.g. application.yaml), and JVM arguments (set by -D). Once the configuration is read, it is treated as immutable and won't be refreshed even if the config source (e.g. a file) is changed.

Since version 0.6 we added the ability to change cluster configs at runtime. This option is disabled by default and should be explicitly enabled. To enable it, you should set the DYNAMIC_CONFIG_ENABLED env property to true or add the dynamic.config.enabled: true property to your yaml config file.

Sample docker compose configuration:

services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    depends_on:
      - kafka0
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
      KAFKA_CLUSTERS_0_NAME: wizard_test
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
      
  ... 

You can even omit all vars other than DYNAMIC_CONFIG_ENABLED to start the application with empty configs and set it up after startup.

When the dynamic config feature is enabled you will see additional buttons that will take you to "Wizard" for editing existing cluster configuration or adding new clusters:

Dynamic config files

Kafka-ui is a stateless application by its nature, so when you edit the configuration during runtime, it will store configuration additions on the container's filesystem (in the dynamic_config.yaml file). The dynamic config file will be overwritten on each configuration submission.

During the configuration process, you can also upload configuration-related files (like truststores and keystores). They will be stored in the /etc/kafkaui/uploads folder with a unique timestamp suffix to prevent name collisions. In the wizard, you can also use files that were mounted to the container's filesystem, without uploading them directly.

Note, that if the container is recreated, your edited (and uploaded) files won't be present and the app will be started with static configuration only. If you want to be able to keep the configuration created by wizard, you have to mount/copy the same files into newly created kafka-ui containers (whole /etc/kafkaui/ folder, by default).
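For example, a sketch of a compose service that keeps the wizard-created config and uploads across container recreation by mounting the whole /etc/kafkaui folder (the host path ~/kui/etc is just an example):

services:
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
    volumes:
      - ~/kui/etc:/etc/kafkaui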

Properties specifying where dynamic config files will be persisted:

 Env variable name          | Yaml property              | Default                          | Description
----------------------------+----------------------------+----------------------------------+------------------------------------------
 DYNAMIC_CONFIG_PATH        | dynamic.config.path        | /etc/kafkaui/dynamic_config.yaml | Path to dynamic config file
 CONFIG_RELATED_UPLOADS_DIR | config.related.uploads.dir | /etc/kafkaui/uploads             | Path where uploaded files will be placed
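A YAML sketch combining these with the enable flag (the paths shown are the defaults, so you would only set them to override):

dynamic:
  config:
    enabled: true
    path: /etc/kafkaui/dynamic_config.yaml
config:
  related:
    uploads:
      dir: /etc/kafkaui/uploads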

Implementation notes:

Currently, a new configuration submission leads to a full application restart. So, if your kafka-ui app starts slowly (not a usual case, but it may happen when you have a slow connection to kafka clusters), you may notice UI inaccessibility during the restart.

SSO Guide

How to configure SSO

SSO additionally requires TLS to be configured for the application. In this example we will use a self-signed certificate; in case you use valid CA-signed certificates, please skip step 1.

Step 1

At this step we will generate a self-signed PKCS12 keypair.

mkdir cert
keytool -genkeypair -alias ui-for-apache-kafka -keyalg RSA -keysize 2048 \
  -storetype PKCS12 -keystore cert/ui-for-apache-kafka.p12 -validity 3650

Step 2

Create a new application in any SSO provider; we will continue with Auth0.

After that, you need to provide callback URLs; in our case we will use https://127.0.0.1:8080/login/oauth2/code/auth0

These are the main parameters required for enabling SSO.

Step 3

To launch UI for Apache Kafka with TLS and SSO enabled, run the following:

docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=LOGIN_FORM \
  -e SECURITY_BASIC_ENABLED=true \
  -e SERVER_SSL_KEY_STORE_TYPE=PKCS12 \
  -e SERVER_SSL_KEY_STORE=/opt/cert/ui-for-apache-kafka.p12 \
  -e SERVER_SSL_KEY_STORE_PASSWORD=123456 \
  -e SERVER_SSL_KEY_ALIAS=ui-for-apache-kafka \
  -e SERVER_SSL_ENABLED=true \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
  -e TRUST_STORE=/opt/cert/ui-for-apache-kafka.p12 \
  -e TRUST_STORE_PASSWORD=123456 \
provectuslabs/kafka-ui:latest

In the case of a trusted CA-signed SSL certificate and SSL termination somewhere outside of the application, we can pass only the SSO-related environment variables:

docker run -p 8080:8080 -v `pwd`/cert:/opt/cert -e AUTH_TYPE=OAUTH2 \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTID=uhvaPKIHU4ZF8Ne4B6PGvF0hWW6OcUSB \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_CLIENTSECRET=YXfRjmodifiedTujnkVr7zuW9ECCAK4TcnCio-i \
  -e SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_AUTH0_ISSUER_URI=https://dev-a63ggcut.auth0.com/ \
  -e SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_AUTH0_SCOPE=openid \
provectuslabs/kafka-ui:latest

Step 4 (Load Balancer HTTP) (optional)

If you're using a load balancer/proxy and use HTTP between the proxy and the app, you might want to set server.forward-headers-strategy to native as well (SERVER_FORWARDHEADERSSTRATEGY=native); for more info refer to this issue.

Step 5 (Azure) (optional)

For Azure AD (Office365) OAUTH2 you'll want to add additional environment variables:

docker run -p 8080:8080 \
        -e KAFKA_CLUSTERS_0_NAME="${cluster_name}"\
        -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="${kafka_listeners}" \
        -e KAFKA_CLUSTERS_0_ZOOKEEPER="${zookeeper_servers}" \
        -e KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS="${kafka_connect_servers}" \
        -e AUTH_TYPE=OAUTH2 \
        -e AUTH_OAUTH2_CLIENT_AZURE_CLIENTID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
        -e AUTH_OAUTH2_CLIENT_AZURE_CLIENTSECRET="somesecret" \
        -e AUTH_OAUTH2_CLIENT_AZURE_SCOPE="openid" \
        -e AUTH_OAUTH2_CLIENT_AZURE_CLIENTNAME="azure" \
        -e AUTH_OAUTH2_CLIENT_AZURE_PROVIDER="azure" \
        -e AUTH_OAUTH2_CLIENT_AZURE_ISSUERURI="https://login.microsoftonline.com/{tenant_id}/v2.0" \
        -e AUTH_OAUTH2_CLIENT_AZURE_JWKSETURI="https://login.microsoftonline.com/{tenant_id}/discovery/v2.0/keys" \
        -d provectuslabs/kafka-ui:latest"

Note that the scope is created by default when the application registration is done in the Azure portal. You'll need to update the application registration manifest to include "accessTokenAcceptedVersion": 2.

SASL_SCRAM

How to configure SASL SCRAM Authentication

You can pass SASL configs in the properties section for each cluster.

Examples:

Please replace

  • <KAFKA_NAME> with cluster name

  • <KAFKA_URL> with broker list

  • <KAFKA_USERNAME> with username

  • <KAFKA_PASSWORD> with password

Running From Docker Image

Running From Docker-compose file

Configuring by application.yaml

docker run -p 8080:8080 \
    -e KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME> \
    -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL> \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512 \
    -e KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG='org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";' \
    -d provectuslabs/kafka-ui:latest 

version: '3.4'
services:
  
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "888:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=<KAFKA_NAME>
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=<KAFKA_URL>
      - KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL
      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=SCRAM-SHA-512
      - KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";
      - KAFKA_CLUSTERS_0_PROPERTIES_PROTOCOL=SASL
kafka:
  clusters:
    - name: local
      bootstrapServers: <KAFKA_URL>
      properties:
        security.protocol: SASL_SSL
        sasl.mechanism: SCRAM-SHA-512        
        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<KAFKA_USERNAME>" password="<KAFKA_PASSWORD>";

Configuration

Most of the Helm chart parameters are common; the following table describes unique parameters related to the application configuration.

Kafka-UI parameters

  • existingConfigMap: Name of the existing ConfigMap with Kafka-UI environment variables (default: nil)

  • existingSecret: Name of the existing Secret with Kafka-UI environment variables (default: nil)

  • envs.secret: Set of the sensitive environment variables to pass to Kafka-UI (default: {})

  • envs.config: Set of the environment variables to pass to Kafka-UI (default: {})

  • yamlApplicationConfigConfigMap: Map with name and keyName keys; name refers to the existing ConfigMap, keyName refers to the ConfigMap key with the Kafka-UI config in Yaml format (default: {})

  • yamlApplicationConfig: Kafka-UI config in Yaml format (default: {})

  • networkPolicy.enabled: Enable network policies (default: false)

  • networkPolicy.egressRules.customRules: Custom network egress policy rules (default: [])

  • networkPolicy.ingressRules.customRules: Custom network ingress policy rules (default: [])

  • podLabels: Extra labels for the Kafka-UI pod (default: {})

  • route.enabled: Enable OpenShift route to expose the Kafka-UI service (default: false)

  • route.annotations: Add annotations to the OpenShift route (default: {})

  • route.tls.enabled: Enable OpenShift route as a secured endpoint (default: false)

  • route.tls.termination: Set OpenShift Route TLS termination (default: edge)

  • route.tls.insecureEdgeTerminationPolicy: Set OpenShift Route Insecure Edge Termination Policy (default: Redirect)

Example

To install Kafka-UI you need to execute the following:

helm repo add kafka-ui https://provectus.github.io/kafka-ui
helm install kafka-ui kafka-ui/kafka-ui --set envs.config.KAFKA_CLUSTERS_0_NAME=local --set envs.config.KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
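Passing the same settings through a values file instead of --set flags (a sketch using the envs.config parameter from the table above):

envs:
  config:
    KAFKA_CLUSTERS_0_NAME: local
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092

Then install with helm install kafka-ui kafka-ui/kafka-ui -f values.yaml.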

To connect to the Kafka-UI web application you need to execute:

kubectl port-forward svc/kafka-ui 8080:80

Open http://127.0.0.1:8080 in the browser to access Kafka-UI.

Contributing

This guide aims to walk you through the process of working on issues and Pull Requests (PRs).

Bear in mind that you will not be able to complete some steps on your own if you do not have “write” permission. Feel free to reach out to the maintainers to help you unlock these activities.

General recommendations

Please note that we have a code of conduct. Make sure that you follow it in all of your interactions with the project.

Issues

Choosing an issue

There are two options to look for the issues to contribute to. The first is our "Up for grabs" board. There the issues are sorted by the required experience level (beginner, intermediate, expert).

The second option is to search for "good first issue"-labeled issues. Some of them might not be displayed on the aforementioned board or vice versa.

You also need to consider labels. You can sort the issues by scope labels, such as scope/backend, scope/frontend or even scope/k8s. If any issue covers several specific areas, and you do not have the required expertise for one of them, just do your part of the work — others will do the rest.

Grabbing the issue

There is a bunch of criteria that make an issue feasible for development. The implementation of any features and/or their enhancements should be reasonable and must be backed by justified requirements (demanded by the community, roadmap plans, etc.). The final decision is left to the maintainers' discretion.

All bugs should be confirmed as such (i.e. the behavior is unintended).

Any issue should be properly triaged by the maintainers beforehand, which includes:

  1. Having a proper milestone set

  2. Having required labels assigned: "accepted" label, scope labels, etc.

Formally, if these triage conditions are met, you can start to work on the issue.

With all these requirements met, feel free to pick the issue you want. Reach out to the maintainers if you have any questions.

Working on the issue

Every issue “in progress” needs to be assigned to a corresponding person. To keep the status of the issue clear to everyone, please keep the card's status updated ("project" card to the right of the issue should match the milestone’s name).

Setting up a local development environment

Please refer to this guide.

Pull Requests

Branch naming

In order to keep branch names uniform and easy to understand, please use the following conventions for branch naming.

Generally speaking, it is a good idea to add a group/type prefix to a branch; e.g., if you are working on a specific issue, you could name the branch issues/xxx.

Here is a list of good examples: issues/123, feature/feature_name, bugfix/fix_thing

Code style

Java: There is a file called checkstyle.xml in the project root under the etc directory. You can import it into IntelliJ IDEA via the Checkstyle plugin.

Naming conventions

REST paths should be written in lowercase and consist of plural nouns only. Also, multiple words placed in a single path segment should be divided by a hyphen (-).

Query variable names should be formatted in camelCase.

Model names should consist of plural nouns only and should be formatted in camelCase as well.

Creating a PR

When creating a PR please do the following:

  1. In commit messages use these closing keywords.

  2. Link an issue(-s) via "linked issues" block.

  3. Set the PR labels. Ensure that you set only the same set of labels that is present in the issue, and ignore yellow status/ labels.

  4. If the PR does not close any of the issues, the PR itself might need to have a milestone set. Reach out to the maintainers to consult.

  5. Assign the PR to yourself. A PR assignee is someone whose goal is to get the PR merged.

  6. Add reviewers. As a rule, reviewers' suggestions are pretty good; please use them.

  7. Upon merging the PR, please use a meaningful commit message, the task name should be fine in this case.

Pull Request checklist

  1. When composing a build, ensure that any install or build dependencies have been removed before the end of the layer.

  2. Update the README.md with the details of changes made to the interface. This includes new environment variables, exposed ports, useful file locations, and container parameters.

Reviewing a PR

WIP

Pull Request reviewer checklist

WIP

Data masking

Topics data masking

You can configure kafka-ui to mask sensitive data shown on the Messages page.

Several masking policies are supported:

REMOVE

For json objects - remove the target fields; otherwise - return the "null" string.

- type: REMOVE
  fields: [ "id", "name" ]
  ...

Apply examples:

{ "id": 1234, "name": { "first": "James" }, "age": 30 } 
 ->
{ "age": 30 } 
non-json string -> null

REPLACE

For json objects - replace the target fields' values with the specified replacement string (by default with ***DATA_MASKED***). Note: if a target field's value is an object, the replacement is applied to all its fields recursively (see example).

- type: REPLACE
  fields: [ "id", "name" ]
  replacement: "***"  #optional, "***DATA_MASKED***" by default
  ...

Apply examples:

{ "id": 1234, "name": { "first": "James", "last": "Bond" }, "age": 30 } 
 ->
{ "id": "***", "name": { "first": "***", "last": "***" }, "age": 30 } 
non-json string -> ***

MASK

Mask the target fields' values with the specified masking characters, recursively (spaces and line separators will be kept as-is). The maskingCharsReplacement array specifies which symbols will be used to replace upper-case chars (index 0), lower-case chars (index 1), digits (index 2), and other symbols (index 3), correspondingly.

- type: MASK
  fields: [ "id", "name" ]
  maskingCharsReplacement: ["A", "a", "N", "_"]   # optional, default is ["X", "x", "n", "-"]
  ...

Apply examples:

{ "id": 1234, "name": { "first": "James", "last": "Bond!" }, "age": 30 } 
 ->
{ "id": "NNNN", "name": { "first": "Aaaaa", "last": "Aaaa_" }, "age": 30 } 
Some string! -> Aaaa aaaaaa_

For each policy, if fields is not specified, the policy will be applied to all the object's fields, or to the whole string if it is not a json object.

You can specify which masks will be applied to a topic's keys/values. Multiple policies will be applied if the topic matches several policies' patterns.

Yaml configuration example:

kafka:
  clusters:
    - name: ClusterName
      # Other Cluster configuration omitted ... 
      masking:
        - type: REMOVE
          fields: [ "id" ]
          topicKeysPattern: "events-with-ids-.*"
          topicValuesPattern: "events-with-ids-.*"
          
        - type: REPLACE
          fields: [ "companyName", "organizationName" ]
          replacement: "***MASKED_ORG_NAME***"   #optional
          topicValuesPattern: "org-events-.*"
        
        - type: MASK
          fields: [ "name", "surname" ]
          maskingCharsReplacement: ["A", "a", "N", "_"]  #optional
          topicValuesPattern: "user-states"

        - type: MASK
          topicValuesPattern: "very-secured-topic"

Same configuration in env-vars fashion:

...
KAFKA_CLUSTERS_0_MASKING_0_TYPE: REMOVE
KAFKA_CLUSTERS_0_MASKING_0_FIELDS_0: "id"
KAFKA_CLUSTERS_0_MASKING_0_TOPICKEYSPATTERN: "events-with-ids-.*"
KAFKA_CLUSTERS_0_MASKING_0_TOPICVALUESPATTERN: "events-with-ids-.*"

KAFKA_CLUSTERS_0_MASKING_1_TYPE: REPLACE
KAFKA_CLUSTERS_0_MASKING_1_FIELDS_0: "companyName"
KAFKA_CLUSTERS_0_MASKING_1_FIELDS_1: "organizationName"
KAFKA_CLUSTERS_0_MASKING_1_REPLACEMENT: "***MASKED_ORG_NAME***"
KAFKA_CLUSTERS_0_MASKING_1_TOPICVALUESPATTERN: "org-events-.*"

KAFKA_CLUSTERS_0_MASKING_2_TYPE: MASK
KAFKA_CLUSTERS_0_MASKING_2_FIELDS_0: "name"
KAFKA_CLUSTERS_0_MASKING_2_FIELDS_1: "surname"
KAFKA_CLUSTERS_0_MASKING_2_MASKING_CHARS_REPLACEMENT_0: 'A'
KAFKA_CLUSTERS_0_MASKING_2_MASKING_CHARS_REPLACEMENT_1: 'a'
KAFKA_CLUSTERS_0_MASKING_2_MASKING_CHARS_REPLACEMENT_2: 'N'
KAFKA_CLUSTERS_0_MASKING_2_MASKING_CHARS_REPLACEMENT_3: '_'
KAFKA_CLUSTERS_0_MASKING_2_TOPICVALUESPATTERN: "user-states"

KAFKA_CLUSTERS_0_MASKING_3_TYPE: MASK
KAFKA_CLUSTERS_0_MASKING_3_TOPICVALUESPATTERN: "very-secured-topic"

Kraft mode + multiple brokers

Kafka in kraft (zk-less) mode with multiple brokers

---
version: '2'
services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    depends_on:
      - kafka0
      - kafka1
      - kafka2
      - schema-registry0
      - kafka-connect0
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092,kafka1:29092,kafka2:29092
      KAFKA_CLUSTERS_0_METRICS_PORT: 9997
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: http://schema-registry0:8085
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: first
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: http://kafka-connect0:8083

  kafka0:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka0
    container_name: kafka0
    ports:
      - 9092:9092
      - 9997:9997
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka0:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CLUSTER_ID:
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka0:29093,2@kafka1:29093,3@kafka2:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://kafka0:29092,CONTROLLER://kafka0:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_JMX_PORT: 9997
      KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka0 -Dcom.sun.management.jmxremote.rmi.port=9997
    volumes:
      - ./scripts/update_run_cluster.sh:/tmp/update_run.sh
      - ./scripts/clusterID:/tmp/clusterID
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka1
    container_name: kafka1
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_NODE_ID: 2
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka0:29093,2@kafka1:29093,3@kafka2:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://kafka1:29092,CONTROLLER://kafka1:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_JMX_PORT: 9997
      KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka1 -Dcom.sun.management.jmxremote.rmi.port=9997
    volumes:
      - ./scripts/update_run_cluster.sh:/tmp/update_run.sh
      - ./scripts/clusterID:/tmp/clusterID
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

  kafka2:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka2
    container_name: kafka2
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_NODE_ID: 3
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka0:29093,2@kafka1:29093,3@kafka2:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://kafka2:29092,CONTROLLER://kafka2:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_JMX_PORT: 9997
      KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka2 -Dcom.sun.management.jmxremote.rmi.port=9997
    volumes:
      - ./scripts/update_run_cluster.sh:/tmp/update_run.sh
      - ./scripts/clusterID:/tmp/clusterID
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"


  schema-registry0:
    image: confluentinc/cp-schema-registry:7.2.1
    ports:
      - 8085:8085
    depends_on:
      - kafka0
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka0:29092
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
      SCHEMA_REGISTRY_HOST_NAME: schema-registry0
      SCHEMA_REGISTRY_LISTENERS: http://schema-registry0:8085

      SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "http"
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: INFO
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas

  kafka-connect0:
    image: confluentinc/cp-kafka-connect:7.2.1
    ports:
      - 8083:8083
    depends_on:
      - kafka0
      - schema-registry0
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka0:29092
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: _connect_configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_TOPIC: _connect_offset
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: _connect_status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry0:8085
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry0:8085
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect0
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components,/usr/share/filestream-connectors,/tmp/kfk"
    volumes:
      - /tmp/kfk:/tmp/kfk:ro
      - /tmp/kfk/test.txt:/tmp/kfk/test.txt

  kafka-init-topics:
    image: confluentinc/cp-kafka:7.2.1
    volumes:
      - ./message.json:/data/message.json
    depends_on:
      - kafka0
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
               cub kafka-ready -b kafka0:29092 1 30 && \
               kafka-topics --create --topic second.users --partitions 3 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
               kafka-topics --create --topic second.messages --partitions 2 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
               kafka-topics --create --topic first.messages --partitions 2 --replication-factor 1 --if-not-exists --bootstrap-server kafka0:29092 && \
               kafka-console-producer --bootstrap-server kafka0:29092 --topic second.users < /data/message.json'"

OAuth2

Examples of setups for different OAuth providers

Generic configuration

In general, the structure of the OAuth2 config looks as follows:

Service Discovery

For specific providers like GitHub (non-enterprise) and Google (see the current list of well-known providers), you don't have to specify URIs as they're well known.

Furthermore, other providers that support OIDC Service Discovery allow fetching the URI configuration from a /.well-known/openid-configuration endpoint. Depending on your setup, you may only have to set the issuer-uri of your provider to enable OIDC Service Discovery.
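
As a minimal sketch (the provider name myoidc, the issuer URL, and all credentials are placeholders; adjust them for your provider):

auth:
  type: OAUTH2
  oauth2:
    client:
      myoidc:
        clientId: xxx
        clientSecret: yyy
        scope: openid
        client-name: myoidc
        provider: myoidc
        redirect-uri: http://localhost:8080/login/oauth2/code/myoidc
        authorization-grant-type: authorization_code
        issuer-uri: https://my-oidc-provider.example.com/realms/main # the remaining URIs are discovered from here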

Provider config examples

Cognito

Google

Azure

GitHub

Example of callback URL for github OAuth app settings:

https://www.kafka-ui.provectus.io/login/oauth2/code/github

For a self-hosted installation, see the properties a bit further below.

Self-hosted/Cloud (GitHub Enterprise Server)

Replace HOSTNAME with your self-hosted platform's FQDN.

Okta

Keycloak

auth:
  type: OAUTH2
  oauth2:
    client:
      <unique_name>:
        clientId: xxx
        clientSecret: yyy
        scope: openid
        client-name: cognito # will be displayed on the login page
        provider: <provider>
        redirect-uri: http://localhost:8080/login/oauth2/code/<provider>
        authorization-grant-type: authorization_code
        issuer-uri: https://xxx
        jwk-set-uri: https://yyy/.well-known/jwks.json
        user-name-attribute: <zzz>
        custom-params:
          type: <provider_type> # fill this in if you're going to use RBAC. Supported values: cognito, google, github, oauth (for other generic providers)
          roles-field: groups # required for RBAC, a field name in OAuth token which will contain user's roles/groups
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
    # ...

auth:
  type: OAUTH2
  oauth2:
    client:
      cognito:
        clientId: xxx
        clientSecret: yyy
        scope: openid
        client-name: cognito
        provider: cognito
        redirect-uri: http://localhost:8080/login/oauth2/code/cognito
        authorization-grant-type: authorization_code
        issuer-uri: https://cognito-idp.eu-central-1.amazonaws.com/eu-central-1_xxx
        jwk-set-uri: https://cognito-idp.eu-central-1.amazonaws.com/eu-central-1_xxx/.well-known/jwks.json
        user-name-attribute: cognito:username
        custom-params:
          type: cognito
          logoutUrl: https://<XXX>.eu-central-1.amazoncognito.com/logout # required only for cognito
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
    # ...

auth:
  type: OAUTH2
  oauth2:
    client:
      google:
        provider: google
        clientId: xxx.apps.googleusercontent.com
        clientSecret: GOCSPX-xxx
        user-name-attribute: email
        custom-params:
          type: google
          allowedDomain: provectus.com # for RBAC
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
    # ...

auth:
  type: OAUTH2
  oauth2:
    client:
      azure:
        clientId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        clientSecret: "somesecret"
        scope: openid
        client-name: azure
        provider: azure
        issuer-uri: "https://login.microsoftonline.com/{tenant_id}/v2.0"
        jwk-set-uri: "https://login.microsoftonline.com/{tenant_id}/discovery/v2.0/keys"
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
    # ...

auth:
  type: OAUTH2
  oauth2:
    client:
      github:
        provider: github
        clientId: xxx
        clientSecret: yyy
        scope: read:org
        user-name-attribute: login
        custom-params:
          type: github
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
    # ...

auth:
  type: OAUTH2
  oauth2:
    client:
      github:
        provider: github
        clientId: xxx
        clientSecret: yyy
        scope: read:org
        user-name-attribute: login
        authorization-uri: http(s)://HOSTNAME/login/oauth/authorize
        token-uri: http(s)://HOSTNAME/login/oauth/access_token
        user-info-uri: http(s)://HOSTNAME/api/v3/user
        custom-params:
          type: github      
auth:
  type: OAUTH2
  oauth2:
    client:
      okta:
        clientId: xxx
        clientSecret: yyy
        scope: [ 'openid', 'profile', 'email', 'groups' ] # default for okta + groups for rbac
        client-name: Okta
        provider: okta
        redirect-uri: http://localhost:8080/login/oauth2/code/okta
        authorization-grant-type: authorization_code
        issuer-uri: https://<okta_domain>.okta.com
        jwk-set-uri: https://yyy/.well-known/jwks.json
        user-name-attribute: sub # default for okta, "email" also available
        custom-params:
          type: oauth
          roles-field: groups # required for RBAC
auth:
  type: OAUTH2
  oauth2:
    client:
      keycloak:
        clientId: xxx
        clientSecret: yyy
        scope: openid
        issuer-uri: https://<keycloak_instance>/auth/realms/<realm>
        user-name-attribute: preferred_username
        client-name: keycloak
        provider: keycloak
        custom-params:
          type: keycloak

Misc configuration properties

Configuration properties for all the things

A reminder: any of these environment variables can be converted into YAML config properties. For example:

KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS

becomes

kafka:
  clusters:
    - bootstrapServers: xxx
| Name | Description |
|------|-------------|
| SERVER_SERVLET_CONTEXT_PATH | URI basePath |
| LOGGING_LEVEL_ROOT | Setting log level (trace, debug, info, warn, error). Default: info |
| LOGGING_LEVEL_COM_PROVECTUS | Setting log level (trace, debug, info, warn, error). Default: debug |
| SERVER_PORT | Port for the embedded server. Default: 8080 |
| KAFKA_ADMIN-CLIENT-TIMEOUT | Kafka API timeout in ms. Default: 30000 |
| KAFKA_CLUSTERS_0_NAME | Cluster name |
| KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS | Address where to connect |
| KAFKA_CLUSTERS_0_KSQLDBSERVER | KSQL DB server address |
| KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_USERNAME | KSQL DB server's basic authentication username |
| KAFKA_CLUSTERS_0_KSQLDBSERVERAUTH_PASSWORD | KSQL DB server's basic authentication password |
| KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTORELOCATION | Path to the JKS keystore to communicate to KSQL DB |
| KAFKA_CLUSTERS_0_KSQLDBSERVERSSL_KEYSTOREPASSWORD | Password of the JKS keystore for KSQL DB |
| KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL | Security protocol to connect to the brokers. For an SSL connection use "SSL"; for a plaintext connection don't set this environment variable |
| KAFKA_CLUSTERS_0_SCHEMAREGISTRY | SchemaRegistry's address |
| KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME | SchemaRegistry's basic authentication username |
| KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD | SchemaRegistry's basic authentication password |
| KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTORELOCATION | Path to the JKS keystore to communicate to SchemaRegistry |
| KAFKA_CLUSTERS_0_SCHEMAREGISTRYSSL_KEYSTOREPASSWORD | Password of the JKS keystore for SchemaRegistry |
| KAFKA_CLUSTERS_0_METRICS_SSL | Enable SSL for Metrics (for PROMETHEUS metrics type). Default: false |
| KAFKA_CLUSTERS_0_METRICS_USERNAME | Username for Metrics authentication |
| KAFKA_CLUSTERS_0_METRICS_PASSWORD | Password for Metrics authentication |
| KAFKA_CLUSTERS_0_METRICS_KEYSTORELOCATION | Path to the JKS keystore to communicate to the metrics source (JMX/PROMETHEUS). For an advanced setup, see kafka-ui-jmx-secured.yml |
| KAFKA_CLUSTERS_0_METRICS_KEYSTOREPASSWORD | Password of the JKS metrics keystore |
| KAFKA_CLUSTERS_0_SCHEMANAMETEMPLATE | How keys are saved to SchemaRegistry |
| KAFKA_CLUSTERS_0_METRICS_PORT | Open metrics port of a broker |
| KAFKA_CLUSTERS_0_METRICS_TYPE | Type of metrics retriever to use. Valid values are JMX (default) or PROMETHEUS. If PROMETHEUS, metrics are read from prometheus-jmx-exporter instead of JMX |
| KAFKA_CLUSTERS_0_READONLY | Enable read-only mode. Default: false |
| KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME | Given name for the Kafka Connect cluster |
| KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS | Address of the Kafka Connect service endpoint |
| KAFKA_CLUSTERS_0_KAFKACONNECT_0_USERNAME | Kafka Connect cluster's basic authentication username |
| KAFKA_CLUSTERS_0_KAFKACONNECT_0_PASSWORD | Kafka Connect cluster's basic authentication password |
| KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTORELOCATION | Path to the JKS keystore to communicate to Kafka Connect |
| KAFKA_CLUSTERS_0_KAFKACONNECT_0_KEYSTOREPASSWORD | Password of the JKS keystore for Kafka Connect |
| KAFKA_CLUSTERS_0_POLLING_THROTTLE_RATE | Max traffic rate (bytes/sec) that kafka-ui is allowed to reach when polling messages from the cluster. Default: 0 (not limited) |
| KAFKA_CLUSTERS_0_SSL_TRUSTSTORELOCATION | Path to the JKS truststore to communicate to Kafka Connect, SchemaRegistry, KSQL, Metrics |
| KAFKA_CLUSTERS_0_SSL_TRUSTSTOREPASSWORD | Password of the JKS truststore for Kafka Connect, SchemaRegistry, KSQL, Metrics |
| TOPIC_RECREATE_DELAY_SECONDS | Time delay between topic deletion and topic creation attempts for the topic recreate functionality. Default: 1 |
| TOPIC_RECREATE_MAXRETRIES | Number of topic creation attempts after topic deletion for the topic recreate functionality. Default: 15 |
| DYNAMIC_CONFIG_ENABLED | Allow changing the application config at runtime. Default: false |
| kafka_internalTopicPrefix | Set a prefix for internal topics. Default: "_" |
| server.reactive.session.timeout | Session timeout. If a duration suffix is not specified, seconds will be used |
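
For instance, a few of these properties combined in a docker-compose environment block (the values here are purely illustrative):

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment:
      SERVER_PORT: 8080
      LOGGING_LEVEL_ROOT: info
      DYNAMIC_CONFIG_ENABLED: 'true'
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka0:29092
      KAFKA_CLUSTERS_0_READONLY: 'true'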

Serialization / SerDe

Serialization, deserialization and custom plugins

Kafka-ui supports multiple ways to serialize/deserialize data.

Int32, Int64, UInt32, UInt64

Big-endian 4/8-byte representation of signed/unsigned integers.

Base64

Base64 (RFC 4648) binary data representation. Useful when the actual data is not important, but exactly the same (byte-wise) key/value should be sent.

Hex

Hexadecimal binary data representation. The byte delimiter and character case can be configured.

Class name: com.provectus.kafka.ui.serdes.builtin.HexSerde

String

Treats binary data as a string in specified encoding. Default encoding is UTF-8.

Class name: com.provectus.kafka.ui.serdes.builtin.StringSerde

Sample configuration (if you want to overwrite default configuration):

ProtobufFile

Class name: com.provectus.kafka.ui.serdes.builtin.ProtobufFileSerde

Sample configuration:

ProtobufRawDecoder

Deserialize-only serde. Decodes protobuf payload without a predefined schema (like protoc --decode_raw command).

SchemaRegistry

The SchemaRegistry serde is configured automatically if Schema Registry properties are set at the cluster level. You can also add new SchemaRegistry-typed serdes that connect to another Schema Registry instance.

Class name: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde

Sample configuration:

Setting serdes for specific topics

You can specify a preferred serde for topic keys/values. This serde will be chosen by default in the UI on a topic's view/produce pages. To do so, set the topicKeysPattern/topicValuesPattern properties for the selected serde. Kafka-ui will choose the first serde that matches the specified pattern.

Sample configuration:

Default serdes

You can specify which serde will be chosen in the UI by default if no other serde is selected via the topicKeysPattern/topicValuesPattern settings.

Sample configuration:

Fallback

If the selected serde can't be applied (an exception was thrown), the fallback serde (String serde with UTF-8 encoding) is applied. Such messages are specially highlighted in the UI.

Custom pluggable serde registration

You can implement your own serde and register it in kafka-ui application. To do so:

  1. Add the kafka-ui-serde-api dependency (available on Maven Central)

  2. Implement the com.provectus.kafka.ui.serde.api.Serde interface. See the javadoc for implementation requirements.

  3. Pack your serde into an uber jar, or provide a directory with a non-shaded jar and its dependency jars

Example pluggable serdes: kafka-smile-serde, kafka-glue-sr-serde

Sample configuration:

kafka:
  clusters:
    - name: Cluster1
      # Other Cluster configuration omitted ... 
      serde:
        - name: HexWithEditedDelimiter
          className: com.provectus.kafka.ui.serdes.builtin.HexSerde
          properties:
            uppercase: "false"
            delimiter: ":"
kafka:
  clusters:
    - name: Cluster1
      # Other Cluster configuration omitted ... 
      serde:
          # registering String serde with custom config
        - name: AsciiString
          className: com.provectus.kafka.ui.serdes.builtin.StringSerde
          properties:
            encoding: "ASCII"
        
          # overriding built-in String serde config
        - name: String 
          properties:
            encoding: "UTF-16"
kafka:
  clusters:
    - name: Cluster1
      # Other Cluster configuration omitted ... 
      serde:
        - name: ProtobufFile
          properties:
            # protobufFilesDir specifies root location for proto files (will be scanned recursively)
            # NOTE: if 'protobufFilesDir' specified, then 'protobufFile' and 'protobufFiles' settings will be ignored
            protobufFilesDir: "/path/to/my-protobufs"
            # (DEPRECATED) protobufFile is the path to the protobuf schema. (deprecated: please use "protobufFiles")
            protobufFile: path/to/my.proto
            # (DEPRECATED) protobufFiles is the location of one or more protobuf schemas
            protobufFiles:
              - /path/to/my-protobufs/my.proto
              - /path/to/my-protobufs/another.proto
            # protobufMessageName is the default protobuf type used to deserialize
            # the message's value if the topic is not found in protobufMessageNameByTopic.
            # Optional; if not set, the first type in the file will be used as default.
            protobufMessageName: my.DefaultValType
            # default protobuf type that is used for KEY serialization/deserialization
            # optional
            protobufMessageNameForKey: my.Type1
            # mapping of topic names to protobuf types, that will be used for KEYS  serialization/deserialization
            # optional
            protobufMessageNameForKeyByTopic:
              topic1: my.KeyType1
              topic2: my.KeyType2
            # mapping of topic names to protobuf types, that will be used for VALUES  serialization/deserialization
            # optional
            protobufMessageNameByTopic:
              topic1: my.Type1
              "topic.2": my.Type2
kafka:
  clusters:
    - name: Cluster1
      # this url will be used by "SchemaRegistry" by default
      schemaRegistry: http://main-schema-registry:8081
      serde:
        - name: AnotherSchemaRegistry
          className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
          properties:
            url:  http://another-schema-registry:8081
            # auth properties, optional
            username: nameForAuth
            password: P@ssW0RdForAuth
        
          # and also add another SchemaRegistry serde
        - name: ThirdSchemaRegistry
          className: com.provectus.kafka.ui.serdes.builtin.sr.SchemaRegistrySerde
          properties:
            url:  http://another-yet-schema-registry:8081
kafka:
  clusters:
    - name: Cluster1
      serde:
        - name: String
          topicKeysPattern: click-events|imp-events
        
        - name: Int64
          topicKeysPattern: ".*-events"
        
        - name: SchemaRegistry
          topicValuesPattern: click-events|imp-events
kafka:
  clusters:
    - name: Cluster1
      defaultKeySerde: Int32
      defaultValueSerde: String
      serde:
        - name: Int32
          topicKeysPattern: click-events|imp-events
kafka:
  clusters:
    - name: Cluster1
      serde:
        - name: MyCustomSerde
          className: my.lovely.org.KafkaUiSerde
          filePath: /var/lib/kui-serde/my-kui-serde.jar
          
        - name: MyCustomSerde2
          className: my.lovely.org.KafkaUiSerde2
          filePath: /var/lib/kui-serde2
          properties:
            prop1: v1

RBAC (Role based access control)

Role-based access control

In this article, we'll walk you through setting up Kafka-UI with role-based access control.

Authentication methods

First of all, you'd need to set up authentication method(s). Refer to this article for OAuth2 setup.

Config placement

First of all, you have to decide whether:

  1. You wish to store all roles in a separate config file

  2. Or keep them within the main config file

This is how you include one more file to start with a docker-compose example:

services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      # other properties, omitted
      SPRING_CONFIG_ADDITIONAL-LOCATION: /roles.yml
    volumes:
      - /tmp/roles.yml:/roles.yml

Alternatively, you can append the roles file contents to your main config file.
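
If you go with the second option, a minimal sketch of such a combined config (the role structure is described in the next section; the role name, subject, and cluster here are placeholders) could look like:

kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
rbac:
  roles:
    - name: "admins"
      clusters: [ local ]
      subjects:
        - provider: oauth_github
          type: user
          value: "SomeUser"
      permissions:
        - resource: topic
          value: ".*"
          actions: all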

Roles file structure

Clusters

In the roles file we define roles. Each role has access to the defined clusters:

rbac:
  roles:
    - name: "memelords"
      clusters:
        - local
        - dev
        - staging
        - prod

Subjects

A role also has a list of subjects, which are the entities we use to assign roles. They are provider-dependent; in general, they can be users, groups, or some other entities (GitHub orgs, Google domains, LDAP queries, etc.). In this example we define a role memelords that contains all the users within the Google domain memelord.lol and, additionally, the GitHub user Haarolean. You can combine as many subjects as you want within a role.

    - name: "memelords"
      subjects:
        - provider: oauth_google
          type: domain
          value: "memelord.lol"
        - provider: oauth_github
          type: user
          value: "Haarolean"

Providers

A list of supported providers and the corresponding subject fetch mechanisms:

  • oauth_google: user, domain

  • oauth_github: user, organization

  • oauth_cognito: user, group

  • ldap: group

  • ldap_ad: (not supported yet, planned for the 0.8 release)

Find more detailed examples in the full example file below.

Permissions

The next thing which is present in your roles file is, surprisingly, permissions. They consist of:

  1. Resource: can be one of APPLICATIONCONFIG, CLUSTERCONFIG, TOPIC, CONSUMER, SCHEMA, CONNECT, KSQL, ACL.

  2. Value: either a fixed string or a regular expression identifying the resource. Not applicable to the applicationconfig, clusterconfig, ksql and acl resources; please do not fill it out for those.

  3. Actions: a list of actions (the possible values depend on the resource, see the lists below) that the permission grants. Note that there's a special action called "all" for any of the resources; it virtually grants all the actions within the corresponding resource. An example enabling viewing and creating topics whose names start with "derp":

      permissions:
        - resource: topic
          value: "derp.*"
          actions: [ VIEW, CREATE ]

Actions

A list of all the actions for the corresponding resources (please note neither resource nor action names are case-sensitive):

  • applicationconfig: view, edit

  • clusterconfig: view, edit

  • topic: view, create, edit, delete, messages_read, messages_produce, messages_delete

  • consumer: view, delete, reset_offsets

  • schema: view, create, delete, edit, modify_global_compatibility

  • connect: view, edit, create, restart

  • ksql: execute

  • acl: view, edit

Example file

A complete file example:

rbac:
  roles:
    - name: "memelords"
      clusters:
        - local
        - dev
        - staging
        - prod
      subjects:
        - provider: oauth_google
          type: domain
          value: "memelord.lol"
        - provider: oauth_google
          type: user
          value: "[email protected]"

        - provider: oauth_github
          type: organization
          value: "memelords_team"
        - provider: oauth_github
          type: user
          value: "memelord"

        - provider: oauth_cognito
          type: user
          value: "username"
        - provider: oauth_cognito
          type: group
          value: "memelords"

        - provider: ldap
          type: group
          value: "admin_staff"

        - provider: ldap_ad # NOT YET SUPPORTED, SEE ISSUE 3741
          type: user
          value: "cn=germanosin,dc=planetexpress,dc=com"

      permissions:
        - resource: applicationconfig
          # value not applicable for applicationconfig
          actions: [ "view", "edit" ] # can be with or without quotes
      
        - resource: clusterconfig
          # value not applicable for clusterconfig
          actions: [ "view", "edit" ] 

        - resource: topic
          value: "ololo.*"
          actions: # can be a multiline list
            - VIEW # can be upper or lowercase
            - CREATE
            - EDIT
            - DELETE
            - MESSAGES_READ
            - MESSAGES_PRODUCE
            - MESSAGES_DELETE

        - resource: consumer
          value: "\_confluent-ksql.*"
          actions: [ VIEW, DELETE, RESET_OFFSETS ]

        - resource: schema
          value: "blah.*"
          actions: [ VIEW, CREATE, DELETE, EDIT, MODIFY_GLOBAL_COMPATIBILITY ]

        - resource: connect
          value: "local"
          actions: [ view, edit, create ]
        # connectors selector not implemented yet, use connects
        #      selector:
        #        connector:
        #          name: ".*"
        #          class: 'com.provectus.connectorName'

        - resource: ksql
          # value not applicable for ksql
          actions: [ execute ]

        - resource: acl
          # value not applicable for acl
          actions: [ view, edit ]

A read-only setup:

rbac:
  roles:
    - name: "readonly"
      clusters:
        # FILL THIS
      subjects:
        # FILL THIS
      permissions:
        - resource: clusterconfig
          actions: [ "view" ]

        - resource: topic
          value: ".*"
          actions: 
            - VIEW
            - MESSAGES_READ

        - resource: consumer
          value: ".*"
          actions: [ view ]

        - resource: schema
          value: ".*"
          actions: [ view ]

        - resource: connect
          value: ".*"
          actions: [ view ]

        - resource: acl
          actions: [ view ]

An admin-group setup example:

rbac:
  roles:
    - name: "admins"
      clusters:
        # FILL THIS
      subjects:
        # FILL THIS
      permissions:
        - resource: applicationconfig
          actions: all
      
        - resource: clusterconfig
          actions: all

        - resource: topic
          value: ".*"
          actions: all

        - resource: consumer
          value: ".*"
          actions: all

        - resource: schema
          value: ".*"
          actions: all

        - resource: connect
          value: ".*"
          actions: all

        - resource: ksql
          actions: all
          
        - resource: acl
          actions: [ view ]

Common problems

Login module control flag not specified in JAAS config

If you are running against Confluent Cloud, have specified the JAAS config correctly, and still keep getting these errors, check whether you are passing the Confluent license in the connector config: the absence of a license produces a number of bogus errors like "Login module control flag not specified in JAAS config".

https://docs.confluent.io/platform/current/connect/license.html

A good resource for what properties are needed is here: https://gist.github.com/rmoff/49526672990f1b4f7935b62609f6f567
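
As a rough sketch of the license-related connector properties (names as per Confluent's licensing docs; verify them against the links above for your connector and environment):

confluent.license: ""                               # set your license key; leaving it empty runs the connector in trial mode
confluent.topic.bootstrap.servers: "kafka0:29092"   # brokers where the license topic is stored
confluent.topic.replication.factor: "1"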

Cluster authorization failed

Check the required permissions.

AWS MSK w/ IAM: Access denied

https://github.com/provectus/kafka-ui/discussions/1104#discussioncomment-1656843 https://github.com/provectus/kafka-ui/discussions/1104#discussioncomment-2963449 https://github.com/provectus/kafka-ui/issues/2184#issuecomment-1198506124

AWS MSK: TimeoutException

Thanks to ahlooli#2666 on Discord:

  1. Create a secret in AWS Secrets Manager that contains a key:value pair with one username and one password. There are certain rules to follow, such as the secret's name (e.g. it needs to start with MSK_), so refer back to the AWS documentation.

  2. Proceed to the MSK console and create the MSK cluster (mine was the "provisioned" type), then choose SASL/SCRAM authentication. For other options, follow the documentation for your preferred configuration.

  3. After the cluster has been created, associate the secret created earlier with the MSK cluster (done in the MSK console).

  4. Then create a custom security group that allows port 9096 (or whichever port the MSK brokers are using). Rough idea:

    1. Source: EKS cluster security group

    2. Type: TCP

    3. Port: 9096

  5. Find all of the MSK brokers' ENIs and attach the above security group to each of them (if you have 3 brokers, you have 3 ENIs, and you need to do it manually one by one).

At this stage, the AWS side should have sufficient permissions to allow kafka-ui to communicate with it.
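
On the kafka-ui side, the cluster then needs matching SASL/SCRAM client properties. A sketch in env-var form (the broker address and credentials are placeholders for your MSK setup):

KAFKA_CLUSTERS_0_NAME: msk
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: b-1.mycluster.xxxxxx.kafka.eu-central-1.amazonaws.com:9096
KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_SSL
KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: SCRAM-SHA-512
KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="msk-user" password="msk-password";'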

DataBufferLimitException: Exceeded limit on max bytes to buffer

Increase the webclient.max-in-memory-buffer-size property value. The default is 20MB.
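
For example, a sketch of the YAML form (the 50MB value is just an illustration):

webclient:
  max-in-memory-buffer-size: 50MB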

URLs are invalid/contain ports when behind a reverse proxy

Add the following property: server.forward-headers-strategy=FRAMEWORK
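
Using Spring's standard property-to-env-var mapping, the same setting in docker-compose env-var form would presumably be:

SERVER_FORWARD_HEADERS_STRATEGY: FRAMEWORK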

Basic Authentication

Basic username+password authentication

In order to enable basic username+password authentication, add these properties:

      AUTH_TYPE: "LOGIN_FORM"
      SPRING_SECURITY_USER_NAME: admin
      SPRING_SECURITY_USER_PASSWORD: pass
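
The same configuration expressed as YAML properties (a sketch):

auth:
  type: LOGIN_FORM

spring:
  security:
    user:
      name: admin
      password: pass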

Please note that basic auth is not compatible with any other auth method, nor with RBAC.