Version: Mosquitto 2.8


Cedalo offers several Mosquitto broker configurations that you can deploy with Docker, ranging from single-node setups to a High Availability cluster. We now also introduce OpenShift support for Mosquitto. OpenShift support is an extension of the existing Kubernetes support for Mosquitto, so most of the deployment strategies are quite similar to the Kubernetes setup.


OpenShift is a cloud-based Kubernetes container platform that offers automated installation, upgrades, and lifecycle management throughout the container stack (the operating system, Kubernetes and cluster services, and applications) on any cloud. Developed by Red Hat, it provides a powerful environment for deploying containerized applications, enabling developers to automate the provisioning, management, and scaling of applications. OpenShift extends Kubernetes with integrated developer tools, a web console, monitoring, and additional security features, making it easier for development and operations teams to develop, deploy, and manage applications consistently across environments. It supports multiple languages and frameworks, streamlining the development process and facilitating DevOps practices.

Because OpenShift is built on top of Kubernetes, let's discuss Kubernetes briefly as well. Kubernetes simplifies Mosquitto broker deployment by providing automated scaling and easy updates. Our main objectives in offering Kubernetes support for Mosquitto are manual and automated scaling, easier deployment and orchestration of Mosquitto resources, and, of course, reaching a wider audience by giving access to various tech stacks. Its container orchestration streamlines management and ensures high availability, making it well suited for IoT and real-time messaging applications. Kubernetes abstracts away infrastructure complexities, allowing you to focus on building robust MQTT services efficiently. In this article, we'll explore the tools we will use to deploy the different versions of the Mosquitto configuration on Kubernetes.

Available Mosquitto configuration deployments for OpenShift:

  • High Availability (3 Mosquitto brokers, 1 Management Center, and support for adding further brokers manually)
  • High Availability with Autoscaling (1 Management Center and on-demand Mosquitto brokers)

The tools we will be discussing here are as follows:

  • Helm Charts (For all versions)
  • OpenShift OKD

Helm Charts

Helm is a package manager for Kubernetes that simplifies application deployment, management, and scaling. Helm charts are predefined packages that encapsulate an application's configuration, dependencies, and runtime requirements. These charts allow you to deploy complex applications like the Mosquitto broker with a single command. You can find further details about Helm charts on the official Helm documentation page. A simple Helm chart deployment could look like this: helm install my-mosquitto ./mosquitto-chart
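To illustrate, the install command above can be extended with release-specific overrides. This is a sketch of a typical Helm workflow; the chart path, value names, and namespace below are illustrative placeholders, not the actual Cedalo chart layout:

```shell
# Inspect the chart's configurable defaults before installing
helm show values ./mosquitto-chart > my-values.yaml

# Install into a dedicated namespace, overriding selected values
# (replicaCount is an illustrative key; the real chart may name it differently)
helm install my-mosquitto ./mosquitto-chart \
  --namespace mosquitto --create-namespace \
  --values my-values.yaml \
  --set replicaCount=3

# Later, apply configuration changes with an in-place upgrade
helm upgrade my-mosquitto ./mosquitto-chart \
  --namespace mosquitto --reuse-values

# Verify the release status
helm status my-mosquitto --namespace mosquitto
```

The same workflow applies unchanged on OpenShift, since Helm talks to the Kubernetes API that OpenShift exposes.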

OpenShift OKD

OKD is the Community Distribution of Kubernetes that powers Red Hat OpenShift. Essentially, it's the open-source version of OpenShift, providing a similar container platform built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. OKD integrates the innovations of the Kubernetes community with additional features and tools to create a more comprehensive, enterprise-ready platform for building, deploying, and managing containerized applications. It offers developers and operators a secure and scalable cloud application platform, including a rich set of command-line tools, a web console, and multi-tenancy support, without tying them to a specific cloud provider. OKD aims to simplify the DevOps process by providing a unified platform that accelerates the development lifecycle while maintaining security and stability.

Further common Kubernetes concepts that we will be using in this setup:

  • Deployments (MMC and HA)
  • Statefulsets (Mosquitto)
  • Services (Mosquitto, MMC and HA)


Deployments

  • Use Case: Deployments are primarily used for managing stateless applications, where instances of the application can be easily replaced or scaled up/down without regard to their identity.
  • Scaling: Deployments provide easy scaling of applications horizontally by creating or removing replicas.
  • Rolling Updates: Deployments support rolling updates and rollbacks, making it easy to update the application to a new version or configuration while ensuring zero downtime.
  • Pod Identity: Pods managed by Deployments do not have stable network identities or storage. They are disposable, and Kubernetes may reschedule them to different nodes.
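As a sketch, a minimal Deployment for a stateless component such as the Management Center might look like the following. The image tag, labels, and port are illustrative placeholders, not the actual Cedalo manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: management-center
spec:
  replicas: 1                      # stateless: replicas are interchangeable
  selector:
    matchLabels:
      app: management-center
  template:
    metadata:
      labels:
        app: management-center     # must match the selector above
    spec:
      containers:
        - name: management-center
          image: cedalo/management-center:latest   # placeholder image tag
          ports:
            - containerPort: 8088                  # placeholder UI port
```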


StatefulSets

  • Use Case: StatefulSets are designed for managing stateful applications, such as databases, where each pod has a stable and unique identity and may require ordered, sequential scaling or rolling updates.
  • Pod Identity: Pods managed by StatefulSets are assigned a stable hostname and storage identity. They are often used for distributed systems where pod identity is critical for operations like failover, sharding, and replication.
  • Scaling: StatefulSets support ordered scaling. You can specify how many replicas you want, and they are created in order, ensuring predictable naming and network identities.
  • Rolling Updates: StatefulSets also support rolling updates, but they are designed to handle the complexity of updating stateful applications while maintaining their unique identities.
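A Mosquitto StatefulSet can be sketched as follows, under the same caveat that the names, image, and volume size are placeholders. Each pod gets a stable hostname (mosquitto-0, mosquitto-1, ...) and its own persistent volume via volumeClaimTemplates:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mosquitto
spec:
  serviceName: mosquitto-headless   # headless Service providing per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: eclipse-mosquitto:2   # placeholder; the HA broker image differs
          ports:
            - containerPort: 1883      # standard MQTT port
          volumeMounts:
            - name: data
              mountPath: /mosquitto/data
  volumeClaimTemplates:                # one PersistentVolumeClaim per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```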

In summary, Deployments are suitable for stateless applications, offering easy scaling and rolling updates. StatefulSets, on the other hand, are designed for stateful applications that require stable identities and predictable, ordered scaling. The choice between the two depends on the specific requirements of your application. StatefulSets are essential for scenarios like running databases in Kubernetes, while Deployments are more suitable for typical web services and microservices.


Services

In Kubernetes, a "Service" is an abstraction that defines a logical set of pods and a policy by which to access them. It acts as a stable endpoint for communication, allowing other services or external users to interact with applications running within a Kubernetes cluster without needing to be aware of the specific details of how those applications are deployed or scaled.

Here are the key aspects of Kubernetes Services:

  • Load Balancing: Services provide load balancing across a set of pods. They ensure that network traffic is distributed fairly and efficiently among the pods that belong to the service. This load balancing is crucial for high availability and scalability.

  • Stable Endpoint: Each service is assigned a stable IP address or DNS name, which doesn't change even if pods come and go due to scaling or failures. This stable endpoint simplifies the process of connecting to the application.

  • Label-Based Selection: Services select pods based on labels and label selectors, making it easy to specify which pods should be part of the service. This label-based selection allows dynamic membership management.

  • Service Discovery: Services enable easy service discovery within the Kubernetes cluster. Other pods or services can discover and connect to the service using its DNS name or IP address.

  • Headless Service: You can create a "headless" service, which doesn't provide a cluster IP but instead allows direct DNS-based communication with individual pods. This is useful for specialized use cases such as StatefulSets.

Types of Services:

  • ClusterIP: The default service type, accessible only within the cluster.

  • NodePort: Exposes the service on a static port on each node's IP, making it accessible from outside the cluster at that port.

  • LoadBalancer: Provides an external IP address that routes traffic to the service. This is primarily used when running Kubernetes in a cloud environment that supports load balancers.

  • ExternalName: Maps the service to a DNS name without creating a cluster IP or load balancer. It is used to make services accessible under external names, often pointing to resources outside the cluster.
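To make this concrete, here is a sketch of a headless Service for the Mosquitto StatefulSet alongside a NodePort Service for an externally reachable component. The ports, labels, and node port number are illustrative placeholders:

```yaml
# Headless Service: clusterIP is None, so DNS resolves directly to pod IPs
# (mosquitto-0.mosquitto-headless, mosquitto-1.mosquitto-headless, ...)
apiVersion: v1
kind: Service
metadata:
  name: mosquitto-headless
spec:
  clusterIP: None
  selector:
    app: mosquitto
  ports:
    - name: mqtt
      port: 1883
---
# NodePort Service: reachable on every node's IP at the allocated node port
apiVersion: v1
kind: Service
metadata:
  name: management-center
spec:
  type: NodePort
  selector:
    app: management-center
  ports:
    - name: http
      port: 8088
      targetPort: 8088
      nodePort: 30088   # must fall in the cluster's NodePort range (default 30000-32767)
```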

Kubernetes Services are a fundamental component for building scalable and resilient applications within a cluster. They abstract the network and routing complexities, allowing developers and operators to focus on the application logic while ensuring reliable communication and load distribution. We will be using different service types for the different applications:

  • Headless Service (Mosquitto)
  • NodePort (MMC and HA)
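Once deployed, the resources above can be inspected with the OpenShift CLI (oc), which mirrors kubectl. The namespace and resource names below are the placeholders used in the sketches above:

```shell
# Switch to the project (namespace) holding the deployment
oc project mosquitto

# Confirm the brokers came up with stable, ordered names (mosquitto-0, -1, -2)
oc get pods -l app=mosquitto

# List the services and their types (headless vs NodePort)
oc get svc

# Resolve the node port assigned to the Management Center service
oc get svc management-center -o jsonpath='{.spec.ports[0].nodePort}'
```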