High Availability Autoscaling
To set up a multi-node HA Mosquitto broker and Management Center with autoscaling using Helm charts, you first need a Kubernetes environment. For deploying a full-fledged Kubernetes cluster across multiple hosts, Kubeadm is an excellent choice: it is a command-line tool in the Kubernetes ecosystem designed to bootstrap and set up a cluster (discussed in the Introduction section). This setup deploys 3 Mosquitto brokers as a StatefulSet, plus a Management Center pod and an HAProxy pod as Deployments, spread across multiple hosts. By default the deployment mounts its volumes from an NFS server, so you need to set up the NFS server before using this deployment.
Why Auto-scaling?
When we deploy the Kubernetes setup using the above procedure, we start by default with 3 Mosquitto pods, 1 Management Center pod and 1 HAProxy pod. Under a high volume of incoming requests and connections, the Mosquitto brokers can become overloaded, especially in DynSec mode. We want the setup to adjust to the load automatically, avoiding crashes and keeping the system within its requirements, without the need for human monitoring and intervention.
How does Auto-scaling work?
Alongside the setup above, we also deploy several helper pods that take care of autoscaling:
Metrics Server:
This pod monitors resource metrics (CPU, memory, etc.) of the deployed application pods.
Horizontal Pod Autoscaler (HPA):
The HPA automatically scales pods up or down based on a threshold. For example, if the CPU threshold is set to 60% and overall CPU consumption across all pods reaches 60%, the HPA scales up the Mosquitto pods.
Cluster-operator:
This pod keeps track of pod scaling and triggers the MQTT API requests that add newly scaled pods to the internal cluster of Mosquitto brokers. For example, if the cluster scales from 3 to 5 Mosquitto brokers, the cluster-operator sends addNode and joinCluster MQTT requests for the 2 added nodes. When pods are scaled down, the cluster-operator sends removeNode and leaveCluster MQTT API requests.
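The HPA's scale-up decision follows Kubernetes' standard formula, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A quick sketch of the arithmetic behind the 3-to-5 example above (the utilization value is illustrative):

```shell
# HPA scaling formula: desired = ceil(current * currentUtil / targetUtil)
current_replicas=3
current_util=100   # observed average CPU utilization (%) across the Mosquitto pods
target_util=60     # the configured hpa_threshold
# Integer ceiling division in shell:
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "$desired"    # 3 pods at 100% CPU against a 60% target -> 5 replicas
```

This is why a sudden connection spike can take the broker StatefulSet from 3 straight to 5 replicas in one reconciliation step.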
Recommended Setup
- 1 Control-plane node, 3 worker nodes and an NFS server
- You can name them control-plane, node-1, node-2, node-3 and nfs-server. If you want different names for your machines, you have to uncompress the Helm chart and change the hostname entries in values.yaml:
tar -xzvf mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz
cd mosquitto-multi-node-multi-host-autoscale
- Change the hostname values from node-1/node-2/node-3 to the names of your machines.
- Package the helm charts
helm package mosquitto-multi-node-multi-host-autoscale
- Follow the installation section.
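As a sketch of the hostname change, the entries can be swapped with sed. The values.yaml fragment below is illustrative only (the real chart's keys may differ), and worker-a/b/c are placeholder machine names:

```shell
# Illustrative values.yaml fragment -- the actual chart layout may differ
cat > /tmp/values.yaml <<'EOF'
hostname1: node-1
hostname2: node-2
hostname3: node-3
EOF

# Replace the default node names with your machine names
sed -i 's/node-1/worker-a/; s/node-2/worker-b/; s/node-3/worker-c/' /tmp/values.yaml
cat /tmp/values.yaml
```

Run this against the real values.yaml inside the uncompressed chart directory, then repackage with helm package as shown above.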
HA-PROXY Configurations
HAProxy needs to be configured to match the Kubernetes setup. For example, servers m1, m2 and m3 need to be configured in this case; you would configure more servers based on your requirements and on the number of mounts you have created on the NFS server. Since the autoscaling setup may scale your deployment up and down, make sure you set up at least 6 server entries in your haproxy.cfg. Instead of Docker IPs we use DNS names to address the pods, e.g. mosquitto-0.mosquitto.multinode.svc.cluster.local. Here mosquitto-0, mosquitto-1 and mosquitto-2 are the names of the individual Mosquitto pods running as a StatefulSet; each new pod increases the pod ordinal by 1. The connection endpoints follow this template:
<pod-name>.<name-of-the-statefulset>.<namespace>.svc.cluster.local
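For example, expanding this template for the first three brokers of the mosquitto StatefulSet in the default multinode namespace:

```shell
# Expand the endpoint template for each broker pod ordinal
STATEFULSET=mosquitto
NAMESPACE=multinode
for ordinal in 0 1 2; do
  echo "${STATEFULSET}-${ordinal}.${STATEFULSET}.${NAMESPACE}.svc.cluster.local"
done
```

Each line printed is a stable DNS name that survives pod restarts, which is what makes it safe to hard-code in haproxy.cfg.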
In the below config, we have configured 6 servers:
global
    daemon
    maxconn 10000

resolvers kubernetes
    nameserver dns1 10.96.0.10:53 # Replace with your Kube DNS IP
    resolve_retries 3
    timeout retry 1s
    hold valid 10s

frontend mqtt_frontend
    bind *:1883
    mode tcp
    default_backend mqtt_backend
    timeout client 10m

backend mqtt_backend
    timeout connect 5000
    timeout server 10m
    mode tcp
    option redispatch
    server m1 mosquitto-0.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m2 mosquitto-1.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m3 mosquitto-2.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m4 mosquitto-3.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m5 mosquitto-4.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
    server m6 mosquitto-5.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions
10.96.0.10 is the cluster DNS (kube-dns/CoreDNS) service IP, matching the comment in the config. We add the resolvers section so that HAProxy does not crash when some of the servers are unavailable, since with autoscaling the backing pods may come and go.
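If you raise max_replica later, the extra server entries can be generated rather than typed by hand; this loop prints entries m1 through m6 in the same format as the config above:

```shell
# Print backend server entries for haproxy.cfg; bump the upper bound to match max_replica
for i in 0 1 2 3 4 5; do
  printf 'server m%d mosquitto-%d.mosquitto.multinode.svc.cluster.local:1883 check resolvers kubernetes on-marked-down shutdown-sessions\n' "$((i + 1))" "$i"
done
```

Note the offset: server mN fronts pod ordinal N-1, because StatefulSet ordinals start at 0.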
Installation
Prerequisites: a Kubernetes cluster should be up and running. If you are yet to set up the cluster, refer to the Kubernetes Cluster Setup section below.
Once the Kubernetes cluster is up and running and you have installed all the configmaps through setup.sh
, you can follow these steps to install the multi-node Mosquitto broker setup:
Setup the folder on your local machine:
- If not done previously, copy or set up the mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling repository on the NFS server and the Control-plane. The path of the repository has to be the same on both machines. Also make sure to create a directory named license inside the repository on the Control-plane that contains the license.lic file we provided you.
Setup NFS Server
- Install necessary dependencies
sudo apt-get update
sudo apt-get install nfs-kernel-server
- Configure the exports directory. Open the /etc/exports file on the NFS server and expose the directories so that pods running on other nodes can access them. You can use the following as a reference; here we expose six Mosquitto data directories and the Management Center config:
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server1/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server2/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server3/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server4/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server5/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server6/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)
/home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server1/management-center/config *(rw,sync,no_root_squash,no_subtree_check)
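The export entries above can likewise be generated with a loop; NFS_USER below is a placeholder for your actual login user:

```shell
# Print /etc/exports entries for six broker data directories plus the MMC config
NFS_USER=ubuntu   # assumption: replace with your actual user
REPO="/home/${NFS_USER}/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling"
for i in 1 2 3 4 5 6; do
  echo "${REPO}/server${i}/mosquitto/data *(rw,sync,no_root_squash,no_subtree_check)"
done
echo "${REPO}/server1/management-center/config *(rw,sync,no_root_squash,no_subtree_check)"
```

Append the output to /etc/exports on the NFS server, keeping one export per potential pod up to max_replica.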
- Make sure all the data directories have adequate privileges so that the Mosquitto Kubernetes pods can create additional directories inside them. If you face issues like permission denied, try giving ownership to user 1000 on all the relevant data directories using the following commands:
sudo chown -R 1000:1000 /home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server1/mosquitto/data
sudo chown -R 1000:1000 /home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server2/mosquitto/data
sudo chown -R 1000:1000 /home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/server3/mosquitto/data
Note: We give ownership to 1000 because the Mosquitto Kubernetes pods use user id 1000 by default.
- Expose the directories using:
sudo exportfs -a
- Restart the NFS kernel server:
sudo systemctl restart nfs-kernel-server
- Install nfs-common on the other nodes so that they can act as NFS clients:
sudo apt-get install nfs-common
Change Directory:
- Navigate to the project directory (i.e. multi-node-multi-host-autoscaling):
cd mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/kubernetes/multi-node-multi-host-autoscaling/
Install Helm Chart:
- Use the following helm install command to deploy the setup to your Kubernetes cluster. Replace <release-name> with the desired name for your Helm release and <namespace> with your chosen Kubernetes namespace:
helm install <release-name> mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz --set repoPath=$HOME -n <namespace>
repoPath: set the repoPath flag to the path where this repo was cloned. The above command expects it to be at $HOME.
namespace: set it to the namespace of your deployment.
Note: Ensure that you have a running Kubernetes cluster set up for this deployment. We recommend setting up the cluster with Kubeadm; you can also use our installation script. To set up the cluster, follow the instructions in Kubernetes Cluster Setup.
Note: If you want to deploy the setup in a different namespace, make sure to pass a separate flag --set namespace=<your-custom-namespace> along with the helm install command.
Note: By default the NFS server IP is set to 10.156.0.10. You can change it by passing --set nfs=<your-nfs-ip> along with the helm install command.
Note: By default the HPA threshold is set to 60, meaning the Horizontal Pod Autoscaler scales up the pods once overall CPU consumption passes 60%. To set a new threshold, pass --set hpa_threshold=<new_hpa_threshold> along with the helm install command.
Note: By default the maximum pod count is set to 5, meaning the HPA can scale to at most 5 replicas. To set a higher number, pass --set max_replica=<your-max-replica-count> along with the helm install command. Make sure you have configured enough servers in the HAProxy config and exported the corresponding data directories on the NFS server for the potential new pods.
- For example: if your NFS IP is 10.10.10.10, your namespace is test-namespace, your new HPA threshold is 80, your max replica count is 6 and your arbitrary release name is sample-release-name, then your helm install command should be:
helm install sample-release-name mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz --set repoPath=$HOME -n test-namespace --set namespace=test-namespace --set nfs=10.10.10.10 --set hpa_threshold=80 --set max_replica=6
You can monitor the running pods using the following command:
kubectl get pods -o wide -n <namespace>
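Beyond watching the pods, the following standard kubectl commands are useful for observing the autoscaling itself (assuming the metrics server is running; replace <namespace> as before):

```shell
# Show the HPA's current/target utilization and replica counts
kubectl get hpa -n <namespace>

# Show per-pod CPU and memory, as reported by the metrics server
kubectl top pods -n <namespace>

# Stream events, e.g. SuccessfulRescale notices from the HPA
kubectl get events -n <namespace> --watch
```

If kubectl top fails, the metrics server pod is usually the first thing to check.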
Open Applications
- MMC: http://localhost:31021
- HAProxy: IP of the host running HAProxy, port 31028
- To uninstall the setup:
helm uninstall <release-name> -n <namespace>
Kubernetes Cluster Setup
Dependencies and Prerequisites
- Docker
- Kubernetes Cluster with Kubeadm
- Helm
If you need to set up a Kubernetes cluster, you can refer to our installation script. If you plan on using your own cluster, you can skip to step 4. Follow these steps on your master/control-plane node:
Setup the ha-cluster setups folder:
- Copy or set up the mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling repository on the NFS server and the Control-plane. The path of the repository has to be the same on both machines. Also make sure to create a directory named license inside the repository on the Control-plane that contains the license.lic file we provided you.
Choose Architecture Folder:
- Depending on your host architecture, navigate to the corresponding folder. For Debian AMD64:
cd mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/kubernetes/multi-node-multi-host-autoscaling/debian_amd64
Install Common Dependencies:
- Run the following command to install the necessary dependencies on all the nodes (including the control-plane node):
bash common-installation-debian.sh
Install Master Dependencies:
- Run the following command to install the necessary dependencies on the master node:
bash master-installation-debian.sh
Setup Kubernetes: IMPORTANT
- Execute the setup script to configure Kubernetes:
bash setup.sh
- Follow the instructions of this script. It creates the configmaps for mosquitto.conf and for your license file. Make sure you have placed your license file in the required location before running the script, i.e. at /home/<user>/mosquitto-2.7-mmc-2.7-cluster-kubernetes-autoscaling/license/license.lic (if the license folder does not exist, create it and add your license.lic file).
- The default namespace for installation is multinode. You can change it through the setup script, which will prompt you to choose the namespace of your choice.
Setup Kubernetes Secrets:
- This step is required for Kubernetes to pull the required Docker images from the registry. You can set the secrets using the following command:
kubectl create secret docker-registry mosquitto-pro-secret --docker-server=registry.cedalo.com --docker-username=<username> --docker-password=<password> --docker-email=<email> -n <namespace>
namespace: should be the same as the one you selected or entered while running setup.sh.
docker-username: your Docker username.
docker-password: your Docker password.
docker-email: the email registered for accessing the Docker registry.
Further Useful Commands:
- If you want to change mosquitto.conf, make the required changes in mosquitto.conf, then delete the configmap and create a new one. Make sure to uninstall the deployment before making the change.
- You can uninstall the setup using the following command:
helm uninstall <release-name> -n <namespace>
- To delete the configmap:
kubectl delete configmap mosquitto-config -n <namespace>
- To reconfigure the configmap:
kubectl create configmap mosquitto-config -n <namespace> --from-file=<path-to-mosquitto.conf>
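Putting the commands above together, a typical mosquitto.conf update cycle looks like this (release and namespace names are placeholders):

```shell
# 1. Uninstall the running release first
helm uninstall <release-name> -n <namespace>

# 2. Replace the configmap with the edited file
kubectl delete configmap mosquitto-config -n <namespace>
kubectl create configmap mosquitto-config -n <namespace> --from-file=<path-to-mosquitto.conf>

# 3. Reinstall the release
helm install <release-name> mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz --set repoPath=$HOME -n <namespace>
```

The uninstall comes first because pods only read the configmap at startup; replacing it under a running release would leave old and new pods with different configurations.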
- If you want to customize the deployments, you can unzip the package using:
tar -xzvf mosquitto-multi-node-multi-host-autoscale-0.1.0.tgz
- Make the changes and repackage the folder mosquitto-multi-node-multi-host-autoscale using:
helm package mosquitto-multi-node-multi-host-autoscale
Usage
Once the installation is complete, you can start using the multi-node Mosquitto broker. Be sure to check the Mosquitto documentation for further details on configuring and using the broker.