Version: Mosquitto 2.9

High Availability Autoscaling

To set up a multi-node HA Mosquitto broker and Management Center with autoscaling using Helm charts, you first need a Kubernetes environment. For deploying a full-fledged Kubernetes cluster on multiple hosts, kubeadm is an excellent choice: it is a command-line tool in the Kubernetes ecosystem designed to facilitate setting up and bootstrapping a Kubernetes cluster (discussed in the Introduction section). This setup deploys 3 Mosquitto brokers as a StatefulSet, plus a Management Center pod and an HAProxy pod as Deployments, with the pods distributed across multiple hosts. By default, this deployment uses an NFS server to mount volumes, so you need to set up the NFS server before using this deployment.

Why Auto-scaling?

When we deploy the Kubernetes setup using the above procedure, we start by default with 3 Mosquitto pods, 1 MMC pod and 1 HAProxy pod. However, we might run into problems when a large number of incoming requests and connections overloads the Mosquitto brokers, especially in DynSec mode. We want the setup to adjust to the load automatically, so that crashes are avoided and system requirements are maintained without any need for human monitoring and intervention.

How does Auto-scaling work?

On deploying the above setup, we also deploy certain helper pods that take care of auto-scaling. For example:

  • Metrics Server: This pod monitors resource metrics of the deployed application pods, such as CPU and memory usage.
  • Horizontal Pod Autoscaler (HPA): The HPA automatically scales the pods up or down based on a threshold. For example, if the CPU threshold is set to 60% and the overall CPU consumption across all pods reaches 60%, the HPA scales up the Mosquitto pods.
  • Cluster-operator: This pod keeps track of pod scaling and triggers requests to the MQTT APIs so that newly scaled pods get added to the internal cluster of Mosquitto brokers. For example, if the current number of Mosquitto brokers is 3 and it scales to 5, the cluster-operator sends addNode and joinCluster MQTT requests for the 2 added nodes. If pods are scaled down, the cluster-operator sends removeNode and leaveCluster MQTT API requests.
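
Once the deployment is up, you can observe these components with standard kubectl commands. A minimal sketch, assuming the default multinode namespace:

  # Current CPU utilization vs. threshold, and the current replica count
  kubectl get hpa -n multinode
  # Per-pod CPU/memory as reported by the Metrics Server
  kubectl top pods -n multinode
  # Watch Mosquitto pods being scaled up or down
  kubectl get pods -n multinode -w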

Recommended Setup

  1. 1 Control-plane node, 3 worker nodes and an NFS server
  2. The Management Center (MMC) is configured with a node affinity, which means the MMC pod is spawned on a specific worker node. The default configuration expects the worker nodes to be named node-1, node-2 and node-3. Given the nodes are named in this fashion, the MMC is spawned on node-1.
  3. If you want to use different names for your nodes, you can do that too. You will have to adjust the hostnames of the nodes in the Helm chart so that the MMC node affinity remains intact. To adjust the Helm chart, uncompress it and change the hostname entries in values.yaml using the following commands (see the sketch after this list):
    • tar -xzvf mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz
    • cd mosquitto-multi-node-multi-host-autoscale
    • Change the values of hostname from node-1/node-2/node-3 to the names of your machines. For example, node-1, node-2 and node-3 can be renamed to worker-node-1, worker-node-2 and worker-node-3. The MMC would then be spawned on the node named worker-node-1.
    • Go back to the parent directory: cd ../
    • Package the helm chart to its original form using: helm package mosquitto-multi-node-multi-host-autoscale
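
A minimal sketch of the whole renaming workflow, assuming the hostnames appear as plain node-1/node-2/node-3 entries in values.yaml:

  tar -xzvf mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz
  cd mosquitto-multi-node-multi-host-autoscale
  # Hypothetical replacements; adjust to your actual node names
  sed -i 's/node-1/worker-node-1/g; s/node-2/worker-node-2/g; s/node-3/worker-node-3/g' values.yaml
  cd ..
  helm package mosquitto-multi-node-multi-host-autoscale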

HA-Proxy Configuration

HAProxy needs to be configured to match the Kubernetes setup. For example, the servers m1, m2 and m3 need to be configured in this case. You may need to configure more servers, based on your requirements and on the number of mounts you have created on the NFS server. The autoscaling setup may scale your deployment up and down, so make sure you set up at least 6 server entries in your haproxy.cfg. Instead of using Docker IPs, we use DNS names to address the pods, for example mosquitto-0.mosquitto.multinode.svc.cluster.local. Here mosquitto-0, mosquitto-1 and mosquitto-2 are the names of the individual Mosquitto pods running as a StatefulSet; each new pod increments the pod ordinal by 1. The template for the connection endpoints is <pod-name>.<name-of-the-statefulset>.<namespace>.svc.cluster.local. Your setup folder comes with a default HAProxy configuration, given below, which assumes that you are using the namespace named "multinode". You can also change the namespace name if you want; the procedure for this is discussed at a later stage. In the config below, we have configured 6 servers:

global
    daemon
    maxconn 10000

resolvers kubernetes
    nameserver dns1 10.96.0.10:53 # Replace with your Kube DNS IP
    resolve_retries 3
    timeout retry 1s
    hold valid 10s

frontend mqtt_frontend
    bind *:1883
    mode tcp
    default_backend mqtt_backend
    timeout client 10m

backend mqtt_backend
    timeout connect 5000
    timeout server 10m
    mode tcp
    option redispatch
    server m1 mosquitto-0.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m2 mosquitto-1.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m3 mosquitto-2.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m4 mosquitto-3.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m5 mosquitto-4.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m6 mosquitto-5.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions

10.96.0.10 is the ClusterIP of the kube-dns service (CoreDNS), the cluster's DNS server. We add the resolvers section so that HAProxy does not crash when some of the servers are unavailable, since with autoscaling the pods behind the server entries may come and go.
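
To find the DNS IP to use in the resolvers section, you can query the kube-dns service directly:

  kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'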

Kubernetes Cluster Setup

Dependencies and Prerequisites

  • Docker
  • Kubernetes Cluster with Kubeadm
  • Helm

If you need to set up a Kubernetes cluster, you can refer to our installation script. If you plan on using your own cluster, you can skip to step 5. Follow these steps on your master/control-plane node:

  1. Set up the ha-cluster setup folder:

    • Copy the mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling folder to the control-plane node. Also make sure to create a directory named license inside the copied folder that contains the license.lic file we provided you, so that the relative path is mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/license/license.lic.
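    • A minimal sketch of this step, assuming the folder was copied to /root on the control-plane node:
      mkdir -p /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/license
      # Adjust the source path to wherever your license.lic resides
      cp /path/to/license.lic /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/license/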
  2. Choose Architecture Folder:

    • Depending on your host architecture, navigate to the corresponding folder:
      • For Debian AMD64:
        cd mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/kubernetes/multi-node-multi-host-autoscale/debian_amd64
  3. Install Common Dependencies:

    • Run the following command to install the necessary dependencies on all the nodes (including the control-plane node):
      bash common-installation-debian.sh
  4. Install Master Dependencies:

    • Run the following command to install the necessary dependencies on the master node:
      bash master-installation-debian.sh
  5. Create a namespace

    • On your Control-Plane node: create the namespace in which you want to deploy the application. The deployment folder is pre-configured for the namespace named multinode. If you want to use the default configuration, create a namespace named multinode using the command below:
    • kubectl create namespace multinode
    • If you want to use a different namespace, use the command: kubectl create namespace <your-custom-namespace>. Replace <your-custom-namespace> with the name of the namespace you want to configure.
  6. Create configmap for your license

    • On your Control-Plane node: create a configmap for your license key (the same license file you set up earlier). You can create the configmap using the following command:
    • kubectl create configmap mosquitto-license -n <namespace> --from-file=<path-to-your-license-file>
    • Make sure the name of the configmap remains mosquitto-license, as this name is required by the deployment files and statefulsets.
    • A sample configmap creation command would look like this if the chosen namespace is multinode and the license file is at the path /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/license/license.lic:
      • kubectl create configmap mosquitto-license -n multinode --from-file=/root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/license/license.lic
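      • To verify that the configmap was created correctly, run: kubectl get configmap mosquitto-license -n multinode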
  7. Set up the NFS Server

    • Copy the mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling folder to the NFS server.

    • Install the necessary dependencies: sudo apt-get update && sudo apt-get install nfs-kernel-server

    • Configure the exports directory. Open the /etc/exports file on the NFS server and expose the directories so that pods running on other worker nodes can access them and mount the volumes.

      • The cluster starts with 3 Mosquitto broker nodes by default; however, we configure and expose a total of 6 Mosquitto data directories, along with an MMC config directory, on the NFS server. As the provisioning of data directories on the NFS server is not dynamic at the moment, configuring three extra Mosquitto data directories allows the autoscaler to scale up to 6 pods seamlessly.

      • The Helm chart is therefore also configured to create a total of 7 persistent volumes and persistent volume claims (6 for Mosquitto data directories and 1 for MMC). However, only three Mosquitto brokers are spun up by default.

      • You can use the following as a reference. Here we expose six Mosquitto data directories and the Management Center config directory, and mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling resides at /root on our NFS server.

        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server1/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server2/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server3/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server4/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server5/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server6/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
        /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server1/management-center/config *(rw,sync,no_root_squash,no_subtree_check)
      • Make sure all the data directories have adequate privileges so that the Mosquitto Kubernetes pods can create additional directories inside them. We assign ownership 1000:1000 to all Mosquitto data directories and root ownership to the Management Center config directory; these match the default users of the Mosquitto and MMC pods.

        sudo chown -R 1000:1000 /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server1/mosquitto/data
        sudo chown -R 1000:1000 /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server2/mosquitto/data
        sudo chown -R 1000:1000 /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server3/mosquitto/data
        sudo chown -R 1000:1000 /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server4/mosquitto/data
        sudo chown -R 1000:1000 /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server5/mosquitto/data
        sudo chown -R 1000:1000 /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server6/mosquitto/data
        sudo chown -R root:root /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server1/management-center/config
    • Expose the directories using: sudo exportfs -a

    • Restart the NFS kernel server: sudo systemctl restart nfs-kernel-server

    • Install the necessary nfs-common package on the other nodes so that they can act as NFS clients: sudo apt-get install nfs-common
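    • The server1 to server6 directory trees must exist before they can be exported. A minimal sketch for creating them on the NFS server and then verifying the exports from a worker node, assuming the folder resides at /root:

        # On the NFS server: create any missing data directories
        mkdir -p /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server{1..6}/mosquitto/data
        mkdir -p /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/server1/management-center/config
        # On a worker node (requires nfs-common): confirm the exports are visible
        showmount -e <your-nfs-ip>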

Installation

Prerequisites:

  1. The Kubernetes cluster should be up and running. If you are yet to set up the cluster, refer to the Kubernetes Cluster Setup section.
  2. You have successfully created the namespace and the configmap for your license (i.e. mosquitto-license).
  3. You have configured your NFS Server by exposing the directories.

Installation using Helm Charts:

Helm charts offer a comprehensive solution for configuring various Kubernetes resources—including stateful sets, deployment templates, services, and service accounts—through a single command, streamlining the deployment process.

  1. Set up the folder on your Control-Plane Node:

    • Make sure you have the mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling folder on the Control-Plane node.
  2. Change Directory:

    • Navigate to the project directory (i.e. multi-node-multi-host-autoscale): cd mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/kubernetes/multi-node-multi-host-autoscale/
  3. Install Helm Chart:

    • Use the following helm install command to deploy the Mosquitto application onto your Kubernetes cluster. Replace <release-name> with the desired name for your Helm release and <namespace> with your chosen Kubernetes namespace:

      helm install <release-name> mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz --set repoPath=<root-path-to-kubernetes-folder> --set nfs=<your-nfs-ip> -n <namespace> --set imageCredentials.registry=registry.cedalo.com --set imageCredentials.username=<username> --set imageCredentials.password=<password> --set imageCredentials.email=<email>
      • repoPath: Set the repoPath flag to the path where the mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling folder resides on the NFS server. For example, if it exists at /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling, the repoPath is /root; if it exists at /home/demo/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling, the repoPath is /home/demo.

      • In our case it exists at /root/mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling, so the repoPath is /root.

      • namespace: Set it to the namespace of your deployment.

      • Note: If you want to deploy the setup in a different namespace other than multinode, make sure to pass a separate flag --set namespace=<your-custom-namespace> along with the helm installation command.

      • Note: You need to configure the IP of your NFS server by passing --set nfs=<your-nfs-ip> along with the helm installation command. Make sure you use the internal NFS IP accessible from within the Kubernetes cluster, not the external IP exposed to the internet (in case you have one).

      • Note: By default the HPA threshold is set to 60. That means the Horizontal Pod Autoscaler will scale the pods if the overall CPU consumption passes the 60% threshold. To set a new threshold, pass --set hpa_threshold=<new_hpa_threshold> along with the helm installation command.

      • imageCredentials.username: Your docker username provided by Cedalo team.

      • imageCredentials.password: Your docker password provided by Cedalo team.

      • imageCredentials.email: Registered e-mail for accessing docker registry.

      • Note: By default the maximum pod number is set to 5. That means the HPA can only scale up to 5 replica pods. If you want to set a higher number, pass --set max_replica=<your-max-replica-count> along with the helm installation command. Make sure you have configured the corresponding server entries in your haproxy.cfg and also exported the data directories on the NFS server for the new potential pods/servers.

      • So, for example: if your NFS IP is 10.10.10.10, mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling resides at /root on your NFS server, your namespace is test-namespace, your username, password and email are demo-username, demo-password and demo@gmail.com, your new HPA threshold is 80, your max replica count is changed to 6 and your arbitrary release name is sample-release-name, then your helm installation command would be:

            helm install sample-release-name mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz --set repoPath=/root -n test-namespace --set namespace=test-namespace --set nfs=10.10.10.10 --set hpa_threshold=80 --set max_replica=6 --set imageCredentials.registry=registry.cedalo.com --set imageCredentials.username=demo-username --set imageCredentials.password=demo-password --set imageCredentials.email=demo@gmail.com
  4. You can monitor the running pods using the following command: kubectl get pods -o wide -n <namespace>

  5. To uninstall the setup: helm uninstall <release-name> -n <namespace>

Your Mosquitto setup is now running with three Mosquitto nodes and the Management Center. To finish the cluster setup, the Management Center offers a UI to create the Mosquitto HA Cluster. The Management Center is reachable via port 31021 on the worker node running the MMC pod.
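
Since the MMC pod is pinned to a specific worker node via node affinity, you can check which node it is running on (and therefore which host IP to use with port 31021). A quick check, assuming the default multinode namespace:

  kubectl get pods -n multinode -o wide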

Further Useful Commands:

  • If you want to change mosquitto.conf, you can do so by uncompressing the helm chart, making the required changes and packaging the helm chart again. The detailed procedure is as follows:
    • tar -xzvf mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz
    • cd mosquitto-multi-node-multi-host-autoscale/files/
    • Make changes to mosquitto.conf and save it.
    • Go back to the parent directory: cd ../
    • Package the helm chart to its original form using: helm package mosquitto-multi-node-multi-host-autoscale
    • Uninstall helm package helm uninstall <release-name> -n <namespace>
    • Reinstall the helm package using the same command you used the first time, from the mosquitto-2.9-mmc-2.9-cluster-kubernetes-autoscaling/kubernetes/multi-node-multi-host-autoscale/ directory.

Create Cluster in Management Center

After you have completed the installation process, the last step is to configure the Mosquitto HA cluster. Access the Management Center and log in with the default username cedalo and password mmcisawesome.

  • Make sure all three Mosquitto nodes are connected in the connection menu. The HAProxy will only connect after the cluster is successfully set up.
  • Navigate to Cluster Management and click NEW CLUSTER.
  • Configure Name, Description and choose between Full-sync and Dynamic Security Sync.
  • Configure IP addresses: instead of private IP addresses we use DNS addresses.
    • For node1: mosquitto-0.mosquitto.multinode.svc.cluster.local and select broker1 from the drop-down
    • For node2: mosquitto-1.mosquitto.multinode.svc.cluster.local and select broker2 from the drop-down
    • For node3: mosquitto-2.mosquitto.multinode.svc.cluster.local and select broker3 from the drop-down
    • Replace "multinode" with your own namespace. If you have used the default one, use the mentioned configurations.
    • Note the offset in numbering: the pod mosquitto-0 has to be mapped to the first broker node in the MMC UI, and so on.
  • Click Save

Connect to cluster

After the creation of the cluster, you can select the cluster leader in the drop-down at the top right of the MMC. This is needed because only the leader is able to configure the cluster. The drop-down appears as soon as you are in one of the broker menus. Go to the "Client" menu and create a new client to connect from. Make sure to assign a role, like the default "client" role, to allow your client to publish and/or subscribe to topics.

Now you can connect to the Mosquitto cluster. You can access it either by connecting directly to the worker node running the HAProxy pod, via the service exposed at port 31028:

In this example command we use Mosquitto Sub to subscribe to all topics: mosquitto_sub -h <ip-of-the-worker-node-running-haproxy> -p 31028 -u <username> -P <password> -t '#'

Or, if you have a load balancer in front of it redirecting the traffic to HAProxy (mostly for setups running on cloud VMs), you can use the IP of the load balancer: mosquitto_sub -h <ip-of-the-load-balancer-running-haproxy> -p 31028 -u <username> -P <password> -t '#'. Make sure to replace the IP, username and password with your own.
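
To verify the connection end to end, you can publish a test message with Mosquitto Pub from a second terminal while the subscriber is running (assuming the role assigned to your client permits publishing to the topic):

  mosquitto_pub -h <ip-of-the-worker-node-running-haproxy> -p 31028 -u <username> -P <password> -t 'test/topic' -m 'hello'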

Usage

Once the installation is complete, you can start using the multi-node Mosquitto broker. Be sure to check the Mosquitto documentation for further details on configuring and using the broker.