
How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams

October 29, 2018
Hugo Guerrero
Related topics:
ContainersEvent-DrivenMicroservices
Related products:
Streams for Apache Kafka

Share:

    On October 25th, Red Hat announced the general availability of its AMQ Streams Kubernetes Operator for Apache Kafka. Red Hat AMQ Streams focuses on running Apache Kafka on OpenShift, providing a massively scalable, distributed, and high-performance data streaming platform. AMQ Streams, based on the Apache Kafka and Strimzi projects, offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput. This backbone enables:

    • Publish and subscribe: Many-to-many dissemination in a fault-tolerant, durable manner.
    • Replayable events: Serves as a repository for microservices to build in-memory copies of source data, up to any point in time.
    • Long-term data retention: Efficiently stores data for immediate access, limited only by disk space.
    • Partitioned messages for horizontal scalability: Allows messages to be organized for maximum concurrent access.

    One of the most frequently requested items from developers and architects is a simple deployment option to get started with for testing purposes. In this guide we will use the Red Hat Container Development Kit, based on minishift, to start an Apache Kafka cluster on Kubernetes.

    To set up a Kafka cluster on OpenShift from scratch, follow the steps below or watch this short video:

    https://vimeo.com/middlewarepro/amq-streams-getting-started

    https://vimeo.com/302934145

    Set up Red Hat Container Development Kit (CDK)

    1. Set up CDK (minishift) on your laptop if you haven’t done so before.
      1. Download CDK.
      2. Follow the Hello World guide to install and configure CDK.
    2. The latest version of CDK leverages the concept of profiles, so we will use them to avoid changing other configurations. Create a new streams profile:

    $ minishift profile set streams

    3. Configure the system requirements in this new profile (we recommend 8 GB of memory and at least 2 vCPUs to run smoothly):

    $ minishift config set cpus 2
    $ minishift config set memory 8192

    4. Set the VM driver. In my case I use VirtualBox; substitute whichever hypervisor you are using:

    $ minishift config set vm-driver virtualbox

    5. Because of ZooKeeper's dependencies on users, we need to remove the anyuid add-on that comes out of the box with CDK:

    $ minishift addons disable anyuid

    NOTE: This is a CRITICAL step if you are running CDK. If the add-on is not disabled, you'll get an error when trying to start the ZooKeeper TLS sidecar.

    6. Start the CDK environment:

    $ minishift start

    If everything worked, you will see output like the following:

    OpenShift server started.
    The server is accessible via web console at:
    
        https://192.168.99.100:8443
    
    You are logged in as:
    
        User:     developer
        Password: <any value>
    
    To login as administrator:
    
        oc login -u system:admin
    
    -- Applying addon 'xpaas':..
    XPaaS imagestream and templates for OpenShift installed
    See https://github.com/openshift/openshift-ansible/tree/release-3.10/roles/openshift_examples/files/examples/v3.10
    -- Applying addon 'admin-user':..
    -- Exporting of OpenShift images is occuring in background process with pid 35470.
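    Steps 2 through 6 above can be collected into one small shell sketch. This is not part of the official guide; it simply prints each minishift command and runs it only when the minishift binary is installed, so it is safe to try (as a dry run) on a machine without CDK:

```shell
#!/bin/sh
# Sketch: replay the CDK configuration steps above in one pass.
# Each command is printed first and executed only if minishift
# is actually installed on this machine.
cdk_setup() {
  for args in \
      "profile set streams" \
      "config set cpus 2" \
      "config set memory 8192" \
      "config set vm-driver virtualbox" \
      "addons disable anyuid" \
      "start"
  do
    echo "+ minishift $args"
    if command -v minishift >/dev/null 2>&1; then
      # Word-splitting of $args into separate arguments is intentional.
      minishift $args
    fi
  done
}

cdk_setup
```

    Swap `virtualbox` for your own hypervisor, just as in step 4.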

    Set up AMQ Streams

    1. Download the Red Hat AMQ Streams installation and example resources from the Red Hat Customer Portal.
    2. Unzip the downloaded install_and_examples_0.zip file.
    3. Navigate to the unzipped folder to get access to the YAML files:

    $ cd <your_download_folder>/install_and_examples_0

    4. Log in to the OpenShift cluster with admin privileges:

    $ oc login -u system:admin

    5. Apply the custom resource definitions (CRDs) and the role bindings required to manage them:

    $ oc apply -f install/cluster-operator/

    6. The last step creates the Kafka CRD and starts the deployment of the Cluster Operator. This operator keeps track of your Kafka resources and provisions or updates those resources as they change. Open a new browser tab and navigate to your web console URL:

    https://<your-ip>:8443/console/project/myproject/overview

    Check the assigned IP by issuing the minishift ip command, or just run minishift console and navigate to My Project.

    7. Log in to the OpenShift web console to check the deployment. Use developer/developer as the username and password. If you haven't done so before, accept the self-signed certificate in your browser.
    8. You will see the newly deployed Cluster Operator running in the project workspace.
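    If you prefer the CLI to the web console, a quick check like the following can confirm the operator rolled out. The deployment name strimzi-cluster-operator is an assumption based on the upstream Strimzi install files; adjust it to whatever name the YAML in your AMQ Streams download uses:

```shell
#!/bin/sh
# Sketch: verify the Cluster Operator deployment from the command line.
# NOTE: "strimzi-cluster-operator" is an assumed deployment name.
check_operator() {
  if command -v oc >/dev/null 2>&1; then
    # Succeeds only when logged in to a cluster that has the deployment.
    oc get deployment strimzi-cluster-operator -n myproject 2>/dev/null && return 0
  fi
  echo "oc unavailable or not logged in; check the web console instead"
}

check_operator
```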

    Set up your first Apache Kafka cluster

    The Cluster Operator will now listen for new Kafka resources. Let's create a simple Kafka cluster with external access configured, so that we can connect from outside the OpenShift cluster.

    1. Create a new my-cluster Kafka cluster with 3 ZooKeeper and 3 Kafka nodes using ephemeral storage:

      $ cat << EOF | oc create -f -
      apiVersion: kafka.strimzi.io/v1alpha1
      kind: Kafka
      metadata:
        name: my-cluster
      spec:
        kafka:
          replicas: 3
          listeners:
            external:
              type: route
          storage:
            type: ephemeral
        zookeeper:
          replicas: 3
          storage:
            type: ephemeral
        entityOperator:
          topicOperator: {}
      EOF
    2. Wait a couple of minutes; after that you will see the deployment of the ZooKeeper and Kafka resources, as well as the topic operator.
    3. Now that our cluster is running, we can create a topic to publish and subscribe to from our external client. Create the following my-topic KafkaTopic custom resource with 3 replicas and 3 partitions in the my-cluster Kafka cluster:
    $ cat << EOF | oc create -f -
    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaTopic
    metadata:
      name: my-topic
      labels:
        strimzi.io/cluster: "my-cluster"
    spec:
      partitions: 3
      replicas: 3
    EOF

    You are now ready to start sending and receiving messages.
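    As a sanity check before connecting clients, you can count how many pods report Running. The helper below parses the kind of listing `oc get pods --no-headers` produces; the sample listing is illustrative only, showing the 3 ZooKeeper and 3 Kafka pods we asked for:

```shell
#!/bin/sh
# Sketch: count Running pods from `oc get pods ... --no-headers` output,
# whose columns are NAME READY STATUS RESTARTS AGE.
count_running() {
  awk '$3 == "Running" { n++ } END { print n + 0 }'
}

# Illustrative sample of the expected listing (not captured from a real cluster):
sample='my-cluster-zookeeper-0   1/1   Running   0   2m
my-cluster-zookeeper-1   1/1   Running   0   2m
my-cluster-zookeeper-2   1/1   Running   0   2m
my-cluster-kafka-0       2/2   Running   0   1m
my-cluster-kafka-1       2/2   Running   0   1m
my-cluster-kafka-2       2/2   Running   0   1m'

# In practice: oc get pods -l strimzi.io/cluster=my-cluster --no-headers | count_running
echo "$sample" | count_running   # prints 6
```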

    Test using an external application

    1. Clone this Git repo to test access to your new Kafka cluster:

    $ git clone https://github.com/hguerrero/amq-examples.git

    2. Switch to the camel-kafka-demo folder:

    $ cd amq-examples/camel-kafka-demo/

    3. Because we are using Routes for external access to the cluster, we need the CA certificates to enable TLS in the client. Extract the public certificate of the broker certificate authority:

    $ oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > src/main/resources/ca.crt
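    Before importing it, it can be worth a quick sanity check that the extracted file really is an X.509 certificate. This is an optional extra, not part of the original guide, and assumes the openssl CLI is available:

```shell
#!/bin/sh
# Sketch: print the subject and expiry date of an extracted CA certificate,
# as an optional sanity check before the keytool import below.
check_ca_cert() {
  openssl x509 -in "$1" -noout -subject -enddate
}

# Usage (path from the step above):
#   check_ca_cert src/main/resources/ca.crt
```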

    4. Import the trusted certificate into a keystore:

    $ keytool -import -trustcacerts -alias root -file src/main/resources/ca.crt -keystore src/main/resources/keystore.jks -storepass password -noprompt

    5. Now you can run the Fuse application using the Maven command:

    $ mvn -Drun.jvmArguments="-Dbootstrap.server=`oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'`:443" clean package spring-boot:run
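    The long -Dbootstrap.server argument above is just the bootstrap route's hostname with port 443 appended, since OpenShift routes terminate TLS on that port. A small helper makes that explicit; the hostname below is a made-up example in the nip.io style minishift routes use:

```shell
#!/bin/sh
# Sketch: build the Kafka bootstrap address from a route hostname,
# mirroring the jsonpath expression in the maven command above.
bootstrap_address() {
  # $1: host from `oc get routes my-cluster-kafka-bootstrap \
  #       -o=jsonpath='{.status.ingress[0].host}'`
  echo "$1:443"
}

# Example with a hypothetical minishift route hostname:
bootstrap_address "my-cluster-kafka-bootstrap-myproject.192.168.99.100.nip.io"
```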

    After the clean and package phases finish, you will see the Spring Boot application start, creating a producer and a consumer that send and receive messages from the “my-topic” Kafka topic.

    14:36:18.170 [main] INFO  com.redhat.kafkademo.Application - Started Application in 12.051 seconds (JVM running for 12.917)
    14:36:18.490 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Discovered coordinator my-cluster-kafka-1-myproject.192.168.99.100.nip.io:443 (id: 2147483646 rack: null)
    14:36:18.498 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Revoking previously assigned partitions []
    14:36:18.498 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] (Re-)joining group
    14:36:19.070 [Camel (MyCamel) thread #3 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-2
    14:36:19.987 [Camel (MyCamel) thread #4 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-4
    14:36:20.982 [Camel (MyCamel) thread #5 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-6
    14:36:21.620 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Successfully joined group with generation 1
    14:36:21.621 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=6de87ffa-c7cf-441b-b1f8-e55daabc8d12] Setting newly assigned partitions [my-topic-0, my-topic-1, my-topic-2]
    14:36:21.959 [Camel (MyCamel) thread #6 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-8
    14:36:21.973 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  consumer-route - consumer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-8
    14:36:22.970 [Camel (MyCamel) thread #7 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-11
    14:36:22.975 [Camel (MyCamel) thread #1 - KafkaConsumer[my-topic]] INFO  consumer-route - consumer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-11
    14:36:23.968 [Camel (MyCamel) thread #8 - KafkaProducer[my-topic]] INFO  producer-route - producer >>> Hello World from camel-context.xml with ID ID-hguerrer-osx-1540578972584-0-14

    You’re done! Press Ctrl + C to stop the running program.

    You've seen how easy it is to create an Apache Kafka cluster on OpenShift and have your applications ready to send and consume messages from it. You can find more information in the official getting started guide if you want to look into more advanced configurations.

    Soon, I will publish another how-to on configuring Kafka Connect and Kafka Streams with OpenShift and AMQ Streams.

    Last updated: December 21, 2021
