Docker Image

The official Lenses docker image is available on Docker Hub under landoop/lenses. The build is automated and its source code is publicly available. The image can be used instead of the Lenses archive or the Lenses helm chart, for which it serves as the base. Support is available via our public channels on a best effort basis; enterprise customers get priority support under their contract's SLA, as well as private communication options.

The free Lenses Box docker image, which includes a complete Kafka setup for development, is available under landoop/kafka-lenses-dev, and its source code is available at github/landoop/fast-data-dev. For more information please check the Lenses Box section.

Lenses Settings

The image uses the standard practice of converting environment variables to configuration options for Lenses. The convention is that letters are uppercase in environment variables and lowercase in Lenses configuration options, whilst underscores in environment variables translate to dots in configuration options. Only options starting with LENSES_ get processed.

Some examples include:

  • Configuration option lenses.port would be set via environment variable LENSES_PORT
  • Configuration option lenses.schema.registry.urls would be set via environment variable LENSES_SCHEMA_REGISTRY_URLS
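The mapping is mechanical, so it can be sketched in a couple of lines of shell (the function name is illustrative, not part of the image):

```shell
# Convert a Lenses configuration option to its environment variable
# form: dots become underscores and letters become uppercase.
to_env_var() {
  printf '%s\n' "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]'
}

to_env_var lenses.port                  # LENSES_PORT
to_env_var lenses.schema.registry.urls  # LENSES_SCHEMA_REGISTRY_URLS
```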

Necessary configuration options, as used throughout the examples on this page, are:

  • LENSES_PORT
  • LENSES_KAFKA_BROKERS
  • LENSES_ZOOKEEPER_HOSTS
  • LENSES_SECURITY_USERS and LENSES_SECURITY_GROUPS

If LDAP is used instead of the basic security mode, then LENSES_SECURITY_USERS should be replaced with the options for LDAP setup.

Other important configuration options appearing on this page include LENSES_SCHEMA_REGISTRY_URLS and LENSES_JMX_PORT.

More information may be found in the configuration section.

Optionally, settings can be mounted as separate files under /mnt/settings or /mnt/secrets. Mounting a whole Lenses configuration file, or a safety valve to be appended to the auto-generated one, is also supported. For more information about these methods, the various quirks of environment variable based configuration, and secret management, please continue reading below.

License File

Lenses needs your license file in order to work. If you do not have one, you may request a trial license or contact us for further information.

The license file may be provided to the docker image via three methods:

  1. As a file, mounted at /license.json or /mnt/secrets/license.json
  2. As the contents of the environment variable LICENSE
  3. As a URL resource that will download on container startup via LICENSE_URL
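For illustration, the three methods could look like the commands below; the paths, variables, and image tag are taken from this page, while the license URL is a placeholder:

```shell
# 1. License as a mounted file
docker run -v "$PWD/license.json:/license.json" landoop/lenses:2.0

# 2. License contents passed via the LICENSE environment variable
docker run -e LICENSE="$(cat license.json)" landoop/lenses:2.0

# 3. License downloaded on container startup via LICENSE_URL
docker run -e LICENSE_URL="https://example.com/license.json" landoop/lenses:2.0
```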


Volumes

Lenses stores its data within Kafka, so docker volumes can be avoided if desired.

The docker image exposes two volumes, /data/logs and /data/kafka-streams-state, which can be used if desired.

The former (/data/logs) is where Lenses logs are stored. The software also logs to stdout, so your existing log management solutions can be used.

The latter (/data/kafka-streams-state) is created when using LSQL in IN_PROC mode, that is when LSQL queries run within Lenses. In that case, Lenses uses this scratch directory to cache LSQL internal state. Whilst this directory can safely be removed, keeping it around can be beneficial, as Lenses will not have to rebuild the cache after a restart.
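A sketch of persisting both volumes across container restarts; the host directory names are illustrative:

```shell
# Persist Lenses logs and the LSQL scratch directory on the host,
# so the LSQL state cache survives container restarts.
docker run \
  -v "$PWD/lenses-logs:/data/logs" \
  -v "$PWD/lenses-state:/data/kafka-streams-state" \
  landoop/lenses:2.0
```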

Process UID/GID

Lenses docker does not require running as root.

The default user in the image is set to root for convenience. Upon start, the initialization script uses root privileges to make sure all directories and files have the correct permissions, then drops to user nobody and group nogroup (65534:65534) before starting Lenses.

If the image is started without root privileges, Lenses will start successfully under the effective uid:gid. In that case, and if volumes are used (for the license, settings, or data), it is the responsibility of the operator to make sure that Lenses has permission to access them.
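A sketch of running without root; the uid:gid 1000:1000 and the host directory are arbitrary examples, and the chown must happen on the host beforehand:

```shell
# Prepare a host directory owned by the non-root user Lenses will run as
mkdir -p lenses-logs
chown 1000:1000 lenses-logs   # requires root on the host

# Start the container under that uid:gid; the init script skips the
# permission fixup it would otherwise perform as root
docker run --user 1000:1000 \
  -v "$PWD/lenses-logs:/data/logs" \
  landoop/lenses:2.0
```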

Other means of Configuration

Lenses configuration options may be mounted as files under /mnt/settings and /mnt/secrets. The latter is usually used for options that contain secrets, such as LENSES_SECURITY_USERS, in conjunction with the container orchestrator's secret management, such as kubernetes secrets.

For this functionality, a file named after the option's environment variable, with the option's value as its content, must be used. As an example, to set lenses.port=9991, one would mount a file under /mnt/settings/LENSES_PORT with content 9991.
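The example above can be prepared on the host like this (the docker run line is shown as a comment since it needs a running Kafka setup):

```shell
# One file per option: the file name is the environment variable,
# the file content is the value. Equivalent to LENSES_PORT=9991.
mkdir -p settings
printf '9991' > settings/LENSES_PORT

# Then mount the directory into the container:
#   docker run -v "$PWD/settings:/mnt/settings" landoop/lenses:2.0
```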

If the traditional configuration approach with files lenses.conf and security.conf is desired instead, they should be mounted under either directory, such as /mnt/settings/lenses.conf and /mnt/secrets/security.conf. Special care is needed for the options lenses.secret.file and lenses.license.file, which point to the secrets configuration and license files. It is advised to omit them; the initialization script will take care to append them correctly to the provided configuration files. If not omitted, it is the responsibility of the operator to set them to the paths where these files are mounted. When the traditional configuration files are used, environment variables are not processed.

A hybrid approach, mixing configuration via environment variables and files, is also supported via the files /mnt/settings/lenses.append.conf and /mnt/settings/security.append.conf. The contents of these files are appended to lenses.conf and security.conf respectively after the environment variables are processed, and thus take priority.
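A minimal sketch of the hybrid approach; the option chosen for the append file is just an example:

```shell
# HOCON appended to the auto-generated lenses.conf after environment
# variables are processed, so these settings take priority over them.
mkdir -p settings
cat > settings/lenses.append.conf <<'EOF'
lenses.port=9991
EOF

# Mount alongside any environment variables:
#   docker run -v "$PWD/settings:/mnt/settings" landoop/lenses:2.0
```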

JAVA Settings

Java and JVM settings may be set as described in the java options configuration section. The most commonly used setting is LENSES_HEAP_OPTS, which restricts the memory usage of Lenses. The default value is -Xmx3g -Xms512m, which permits Lenses to use up to 3GB of memory for heap space.
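For instance, to lower the heap ceiling for a small deployment (the 1GB value is illustrative):

```shell
# Override the default -Xmx3g -Xms512m heap settings
docker run -e LENSES_HEAP_OPTS="-Xmx1g -Xms512m" landoop/lenses:2.0
```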

Cloud Service Discovery

Version 2.0 introduced service discovery to the Lenses docker image as a preview feature.

Traditionally, except for the brokers, all other service and JMX endpoints (for Zookeeper, Kafka Connect, and Schema Registry) must be explicitly provided to Lenses. This can be cumbersome for larger or dynamically deployed clusters.

The service discovery feature can detect the various service endpoints automatically via the metadata services of widely used cloud providers, such as Amazon AWS, Google Cloud, Microsoft Azure, DigitalOcean, OpenStack, Aliyun Cloud, Scaleway, and SoftLayer. The discovery relies on instance tags to work.

A list of the available options follows. Options with default values may be omitted when the default value corresponds to the correct setup value:

Variable              Description                                     Default           Required
SD_CONFIG             Service discovery configuration. Please look    -                 yes
                      at go-discovery and the examples below.
SD_BROKER_FILTER      Filter for brokers. Please look at              -                 when broker discovery
                      go-discovery and the examples below.                              is required
SD_BROKER_PORT        Broker port                                     9092              no
SD_BROKER_PROTOCOL    Broker protocol to use                          PLAINTEXT         no
SD_ZOOKEEPER_FILTER   Filter for Zookeeper nodes. Please look at      -                 when zookeeper discovery
                      go-discovery and the examples below.                              is required
SD_ZOOKEEPER_PORT     Zookeeper port                                  2181              no
SD_REGISTRY_FILTER    Filter for schema registries. Please look at    -                 when schema registry
                      go-discovery and the examples below.                              discovery is required
SD_REGISTRY_PORT      Schema Registry port                            8081              no
SD_REGISTRY_JMX_PORT  Schema Registry JMX port                        -                 no
SD_CONNECT_FILTERS    Comma-separated filters for connect clusters'   -                 when connect worker discovery
                      workers. Please look at go-discovery and                          (of one or more distributed
                      the examples below.                                               clusters) is required
SD_CONNECT_NAMES      Comma-separated names of connect clusters       -                 only if more than one cluster
                                                                                        must be discovered
SD_CONNECT_PORTS      Comma-separated connect workers' ports          8083              no
SD_CONNECT_JMX_PORTS  Comma-separated connect workers' JMX ports      -                 no
SD_CONNECT_CONFIGS    Comma-separated names of connect configs        connect-configs   only if more than one cluster
                      topics                                                            must be discovered
SD_CONNECT_OFFSETS    Comma-separated names of connect offsets        connect-offsets   only if more than one cluster
                      topics                                                            must be discovered
SD_CONNECT_STATUSES   Comma-separated names of connect statuses       connect-statuses  only if more than one cluster
                      topics                                                            must be discovered

Examples of service discovery configuration in various clouds follow.

Amazon AWS setup for brokers, zookeeper nodes, schema registries, and one connect distributed cluster without JMX, with everything else (ports, connect topics, protocol) left at default values. The Lenses VM should have the IAM permission ec2:DescribeInstances. The Schema Registry runs on the same instances as Connect. This example would work as-is if you used Confluent's AWS templates to deploy your cluster.

SD_CONFIG=provider=aws region=eu-central-1 addr_type=public_v4

SD_BROKER_FILTER=tag_key=Name tag_value=*broker*

SD_ZOOKEEPER_FILTER=tag_key=Name tag_value=*zookeeper*

SD_REGISTRY_FILTER=tag_key=Name tag_value=*worker*

SD_CONNECT_FILTERS=tag_key=Name tag_value=*worker*

Google Cloud setup for brokers, zookeeper nodes, schema registries, and two connect distributed clusters with JMX monitoring and ports left at default values. The Lenses VM should have a scope that permits read access to the Compute API.

SD_CONFIG=provider=gce zone_pattern=europe-west1.*

DigitalOcean setup for brokers, zookeeper nodes, schema registries, and a connect distributed cluster with JMX monitoring, custom ports, and the SASL_SSL protocol. A read-only API token from the DigitalOcean control panel is needed in order for service discovery to get a list of running droplets. Private IPv4 networking should be enabled for the droplets.

SD_CONFIG=provider=digitalocean api_token=[YOUR_API_TOKEN]

SD_BROKER_FILTER=region=lon1 tag_name=broker

SD_ZOOKEEPER_FILTER=region=lon1 tag_name=zookeeper

SD_REGISTRY_FILTER=region=lon1 tag_name=registry

SD_CONNECT_FILTERS=region=lon1 tag_name=connect

Configuration Quirks

Lenses configuration is in HOCON format, which at times can be challenging to produce from other formats, such as environment variables, especially when these are set via nonstandard channels such as YAML or docker environment files.

YAML is well supported, including multiline variables. Quotes should be avoided unless they are needed as literals, as in the url and jmx sections of LENSES_ZOOKEEPER_HOSTS. An example follows:

  LENSES_KAFKA_BROKERS: PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,PLAINTEXT://kafka-3:9092
  LENSES_ZOOKEEPER_HOSTS: |
    [
      {url:"zookeeper.1.url:2181", jmx:"zookeeper.1.url:9585"},
      {url:"zookeeper.2.url:2181", jmx:"zookeeper.2.url:9585"}
    ]
  LENSES_SECURITY_GROUPS: |
    [
      {"name": "adminGroup", "roles": ["admin", "write", "read"]},
      {"name": "writeGroup", "roles": ["read", "write"]},
      {"name": "readGroup",  "roles": ["read"]},
      {"name": "nodataGroup",  "roles": ["nodata"]}
    ]
  LENSES_SECURITY_USERS: |
    [
      {"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]},
      {"username": "write", "password": "write", "displayname": "Write User", "groups": ["writeGroup"]},
      {"username": "read", "password": "read", "displayname": "Read Only", "groups": ["readGroup"]},
      {"username": "nodata", "password": "nodata", "displayname": "No Data", "groups": ["nodataGroup"]}
    ]

Docker environment files do not support multiline entries. Again, quotes should be used only where literals are expected and avoided in any other case.

SD_CONFIG=provider=gce zone_pattern=europe-west1.*
LENSES_SECURITY_GROUPS=[{"name": "adminGroup", "roles": ["admin", "write", "read"]}, {"name": "writeGroup", "roles": ["read", "write"]}]
LENSES_SECURITY_USERS=[{"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]}, {"username": "write", "password": "write", "displayname": "Write User", "groups": ["writeGroup"]}]
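To sketch the workflow, such an environment file can be written on the host and passed to docker with the standard --env-file flag (the file name is arbitrary; values come from the examples on this page):

```shell
# One KEY=value per line; no quotes around values unless literal
# quotes are wanted, and no multiline entries.
cat > lenses.env <<'EOF'
SD_CONFIG=provider=gce zone_pattern=europe-west1.*
LENSES_SECURITY_GROUPS=[{"name": "adminGroup", "roles": ["admin", "write", "read"]}]
EOF

# Pass the whole file to the container:
#   docker run --env-file lenses.env landoop/lenses:2.0
```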

Monitoring and Prometheus

Lenses runs on the JVM and as such can expose a JMX endpoint where applications can connect to access metrics. To enable the JMX endpoint, set the environment variable LENSES_JMX_PORT. Depending on your environment, additional settings may be needed, such as:

LENSES_JMX_OPTS=" -Djava.rmi.server.hostname=[LENSES_JMX_HOST][LENSES_JMX_PORT]"

A Prometheus endpoint is provided by default through a jmx_exporter instance that is loaded as a Java agent into Lenses. Its port is 9102 and cannot be altered, though it may be exposed under a different port as per docker settings. The agent is always loaded and exposes process and Kafka client metrics.

It is common practice when deploying into Kubernetes to expose a liveness endpoint. The Lenses docker image does not have a dedicated endpoint, but the address of Lenses itself can be used for this purpose.
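For instance, a probe could simply check that the Lenses HTTP address responds; the host and port below assume the LENSES_PORT=9991 setting used elsewhere on this page:

```shell
# curl -f fails on HTTP status >= 400, so a zero exit status
# means the Lenses process is up and serving requests.
curl -sf -o /dev/null http://localhost:9991/ && echo alive
```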


Examples

These examples serve as a quick reference guide.

Docker-compose example configuration.

version: '2'
services:
  lenses:
    image: landoop/lenses:2.0
    environment:
      LENSES_PORT: 9991
      LENSES_KAFKA_BROKERS: "PLAINTEXT://broker.1.url:9092,PLAINTEXT://broker.2.url:9092"
      LENSES_ZOOKEEPER_HOSTS: |
        [
          {url:"zookeeper.1.url:2181", jmx:"zookeeper.1.url:9585"},
          {url:"zookeeper.2.url:2181", jmx:"zookeeper.2.url:9585"}
        ]
      # Secrets can also be passed as files. Check _examples/
      LENSES_SECURITY_GROUPS: |
        [
          {"name": "adminGroup", "roles": ["admin", "write", "read"]},
          {"name": "readGroup",  "roles": ["read"]}
        ]
      LENSES_SECURITY_USERS: |
        [
          {"username": "admin", "password": "admin", "displayname": "Lenses Admin", "groups": ["adminGroup"]},
          {"username": "read", "password": "read", "displayname": "Read Only", "groups": ["readGroup"]}
        ]
    ports:
      - 9991:9991
      - 9102:9102
    volumes:
      - ./license.json:/license.json
    network_mode: host

A Kubernetes pod and service example is also available. More information about running Lenses inside Kubernetes is available in the kubernetes and helm section.