Setting Up HA Clusters
This section explains how to reconfigure the default cluster installation to run on three separate nodes. It assumes that you have set up a private network with three servers, each reachable via an internal and an external IP address.
It is also assumed that you have placed the individual server folder on each node and that the mosquitto user owns the folder and its contents. This step can be done later if you choose to remove the cluster via the UI.
Also make sure the required ports are open. Within the private network, ports 1885, 1883, and 7000 must be open on each server. For the external network, port 1883 is used by default. If you want to use different ports (e.g., 8883 and 9001, commonly used for MQTTS and WSS), open these as well.
Follow the next steps for the reconfiguration.
Remove the existing cluster
There are two ways to delete the existing cluster of the default setup.
Remove via UI
If you have already started the default setup successfully, you can simply go into the UI and delete the cluster. This completely resets the nodes to a ready-to-join status, so a new cluster can be created after you reconfigure the setup details on the three nodes you have already set up.
To do so, open the UI and choose "Edit" on the current cluster connection details. Delete the cluster by clicking the bin icon under "Public Cluster connection". This deletes the existing cluster, and all nodes are now ready to be deployed.
Remove in the file system
You can remove the nodes from the cluster manually by deleting the ha-config.json file and the contents of the sync folder. You can find both in each server folder under mosquitto/data.
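As a sketch, assuming the sync folder sits in mosquitto/data alongside ha-config.json, the manual cleanup on each node could look like this (double-check the paths in your installation before deleting anything):

```shell
# Run from the server folder on each node (paths assumed from the guide).
# Remove the HA configuration file; -f ignores it if it is already gone.
rm -f mosquitto/data/ha-config.json

# Remove the contents of the sync folder, keeping the folder itself.
rm -rf mosquitto/data/sync/*
```

Remember that the mosquitto user owns these files, so you may need to run the commands as that user or with sudo.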
When you then start the UI for the first time, adjust or delete the saved settings from the UI and start fresh with the cluster creation.
Reconfigure files
Mosquitto configuration
The mosquitto.conf file does not need any changes at this point, unless you want to add or configure ports.
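If you do want to add ports, this is done with additional listener blocks in mosquitto.conf. A minimal sketch (the port number and the websockets protocol choice are illustrative, not part of the default setup):

```
# Additional WebSocket listener (illustrative example - pick a port
# that does not collide with the ports used elsewhere in this guide)
listener 9001
protocol websockets
```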
HAProxy configuration
The HAProxy configuration by default looks like this:
global
    daemon
    maxconn 4096

frontend mqtt_frontend
    bind *:1883
    mode tcp
    default_backend mqtt_backend
    timeout client 10m

backend mqtt_backend
    timeout connect 5000
    timeout server 10m
    mode tcp
    option redispatch
    server m1 172.20.1.1:1888 check on-marked-down shutdown-sessions
    server m2 172.20.1.2:1888 check on-marked-down shutdown-sessions
    server m3 172.20.1.3:1888 check on-marked-down shutdown-sessions
The mqtt_frontend maps incoming traffic from port 1883 to one of the three endpoints listed in the mqtt_backend section. These are the addresses of the cluster nodes in ip:port format. Replace the IP addresses with the internal IP addresses of each node in the private network. To avoid port allocation problems, port 1883 is mapped to port 1888 on Mosquitto by default.
    server m1 <privateip.server1>:1888 check on-marked-down shutdown-sessions
    server m2 <privateip.server2>:1888 check on-marked-down shutdown-sessions
    server m3 <privateip.server3>:1888 check on-marked-down shutdown-sessions
In a single-node setup, HAProxy is not used. In a cluster, it is necessary to map incoming traffic from clients to the cluster leader, and it is also used for TLS termination when certificates are in use. If you have followed the advice from the certificate configuration mentioned above, you will already have set up TLS termination the way you need it.
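As a sketch, TLS termination in HAProxy is configured on the frontend bind line. The port and certificate path below are assumptions, not part of the default setup:

```
frontend mqtt_frontend
    # Terminate TLS on 8883; cert.pem must contain certificate and key.
    # Path is illustrative - adjust to your deployment.
    bind *:8883 ssl crt /usr/local/etc/haproxy/certs/cert.pem
    mode tcp
    default_backend mqtt_backend
    timeout client 10m
```

With this in place, traffic is decrypted at HAProxy and forwarded to the backend as plain MQTT.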
Docker-compose configuration
If you are using a Docker setup, you need to change some entries in the docker-compose.yml file. The default file looks something like this:
services:
  mosquitto:
    image: registry.cedalo.com/mosquitto/mosquitto:<version>
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
      - ./license:/mosquitto/license:ro
    hostname: mosquitto2
    networks:
      mosquitto:
        ipv4_address: 172.20.1.2
    environment:
      CEDALO_LICENSE_FILE: /mosquitto/license/license.lic
      MOSQUITTO_DYNSEC_PASSWORD: 1LD1RGNgFu1u4tfasOh1j7vOrCLhMVq0
    restart: unless-stopped
  haproxy:
    image: haproxy:2.7
    ports:
      - 127.0.0.1:1884:1883
    volumes:
      - ./haproxy:/usr/local/etc/haproxy:ro
    restart: unless-stopped
    networks:
      mosquitto:
        ipv4_address: 172.20.2.2
networks:
  mosquitto:
    name: mosquitto
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
In the default setup, all communication took place within the local Docker network. Now the private network is used for this communication.
Therefore, the Mosquitto broker also needs to publish its port on the private network by adding:
services:
  mosquitto:
    ports:
      - <privateip>:1888:1888
This way, the HAProxy instances from all nodes can redirect traffic to each node in the network.
Additionally, we need to update the port mapping of the HAProxy configuration:
ports:
  - 1883:1883
Remove the bind to localhost (127.0.0.1); this allows access to the port via all interfaces. For servers 2 and 3, you will need to adjust the 1884:1883 and 1885:1883 bindings to 1883:1883 as well. These bindings were a temporary measure to avoid port conflicts in the default setup.
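Putting the two port changes together, the relevant parts of docker-compose.yml on each node would then look roughly like this (replace <privateip> with that node's private network address):

```yaml
services:
  mosquitto:
    ports:
      - <privateip>:1888:1888   # expose the broker on the private network
  haproxy:
    ports:
      - 1883:1883               # no 127.0.0.1 prefix: listen on all interfaces
```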
Recreate Cluster
Now you can start the setup again and access the UI via localhost:3000.
Reconfigure your cluster via the UI. Add the new IP addresses to the nodes and create the new cluster. After you add the first node, the UI pre-fills inputs such as credentials from node 1 for the next nodes you connect. This speeds up the process; change only what needs to differ.
At least three accessible nodes are needed to create the cluster. After creation, enter the public IP/DNS configured in the next step and finish the process.
This endpoint should point to the HAProxy instances, or to a DNS name that redirects to them. Click "Done" to finish the setup.
Cluster Access for High Availability
Your cluster is now available via each HAProxy IP address.
A client, however, should only need to know a single entry point for its connection. Using the direct IP address of one node is not sufficient, because that node may become unavailable.
There are multiple ways to make the cluster available:
DNS
One way to ensure that your clients connect to the Mosquitto cluster seamlessly is to use DNS (Domain Name System) to provide a single endpoint. With DNS, you can map a domain name to multiple IP addresses (the IPs of your three Mosquitto nodes). This technique is often referred to as "round-robin DNS."
When a client attempts to connect to the domain name, the DNS server returns one of the IP addresses in a rotating sequence. This ensures that client connections are spread across the HAProxy instances on all nodes in the cluster. If one node goes down, a client can automatically reconnect to another node once the DNS query returns a different IP address.
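The rotation behaviour can be illustrated with a small sketch (the IP addresses are documentation placeholders, not real cluster nodes):

```shell
# Round-robin over three hypothetical node IPs, in the order a DNS
# server would hand them out to successive clients.
ips=(203.0.113.1 203.0.113.2 203.0.113.3)
for i in 0 1 2 3 4 5; do
  echo "client $i connects to ${ips[$(( i % 3 ))]}"
done
```

Each client simply uses the one domain name; the rotation happens on the DNS side.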
Virtual IPs
Another approach to ensure high availability and seamless failover in your Mosquitto cluster setup is to use Virtual IPs (VIPs). A Virtual IP is a single IP address that can be dynamically mapped to one of the nodes in your cluster.
In a three-node setup, a VIP can be configured to move between nodes based on their availability or health status. If the node currently hosting the VIP fails, the VIP can automatically transfer to another healthy node in the cluster, ensuring that clients always have a consistent endpoint to connect to.
VIPs are typically managed using failover protocols like VRRP (Virtual Router Redundancy Protocol) or keepalived, which handle the logic of switching the IP address from one node to another based on predefined conditions.
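A minimal keepalived sketch for such a VIP might look like this (the interface name, router ID, priority, and the VIP itself are assumptions to adjust for your environment):

```
vrrp_instance mosquitto_vip {
    state MASTER            # BACKUP on the other two nodes
    interface eth0          # network interface holding the VIP (assumed)
    virtual_router_id 51    # must match on all nodes
    priority 100            # lower value on the backup nodes
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # the VIP clients connect to (placeholder)
    }
}
```

The node with the highest priority holds the VIP; if it stops sending VRRP advertisements, a backup node takes the address over.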
By combining VIPs with your Mosquitto cluster, you ensure that your clients have a single, reliable point of entry, even if individual nodes within the cluster become unavailable.