It’s been mentioned that other components of ChirpStack are stateless, so scaling isn’t an issue. However, the ChirpStack High Availability (HA) and scaling documentation says to be careful around the Gateway Bridge. Since session affinity on source IP should mitigate the issues around state, can someone describe what potential issues would occur if we perform auto-scaling or a rolling release? Is it message loss due to fragmentation of UDP packets?
The problem you might see with an HA setup of the Gateway Bridge (e.g. GWB1 & GWB2) is that some of the UDP frames are routed to GWB1 and the rest to GWB2, because UDP is connectionless, in contrast to TCP. You should therefore confirm that your load-balancer is able to always forward data from the same source to the same backend. Otherwise all Gateway Bridge instances will subscribe to all the gateway topics and will each send downlink data to the gateway.
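For anyone running this on Kubernetes (an assumption on my part, since the question mentions auto-scaling and rolling releases), the source-IP affinity described above can be sketched with a Service using `sessionAffinity: ClientIP`, which pins all UDP frames from a given gateway IP to one Gateway Bridge pod. Names and the timeout value here are illustrative:

```yaml
# Hypothetical Service fronting the Gateway Bridge's Semtech UDP port.
# sessionAffinity: ClientIP keeps each gateway's source IP mapped to the
# same backend pod, so uplink/downlink state stays on one instance.
apiVersion: v1
kind: Service
metadata:
  name: chirpstack-gateway-bridge
spec:
  selector:
    app: chirpstack-gateway-bridge
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # affinity window; tune to your fleet
  ports:
    - name: semtech-udp
      protocol: UDP
      port: 1700
      targetPort: 1700
```

Note that affinity is only kept while the backing pod exists; a scale-down or rolling restart still moves gateways to a new pod, which is where the caution in the docs comes from.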
Thanks @brocaar. LoRa is an inherently lossy protocol anyway, so this adds another potential point of packet loss. It adds more weight to prioritising the Basics Station WebSocket (TCP/IP) connection over the UDP Packet Forwarder; we can at least gracefully close the connection when it scales down.
An alternative is using the ChirpStack MQTT Forwarder on the gateway; then you do not need to run the ChirpStack Gateway Bridge.
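For readers unfamiliar with this setup: the forwarder on the gateway publishes straight to your MQTT broker, so there is no UDP hop to load-balance. A minimal config sketch might look like the following; the exact keys and file layout are from memory and should be checked against the `chirpstack-mqtt-forwarder.toml` documentation for your version:

```toml
# /etc/chirpstack-mqtt-forwarder/chirpstack-mqtt-forwarder.toml (sketch)

[mqtt]
  # Region prefix used in the MQTT topic structure.
  topic_prefix = "eu868"
  # Your broker; TLS and client certificates are also supported.
  server = "tcp://broker.example.com:1883"

[backend]
  # Which local backend to read frames from, e.g. "concentratord"
  # or "semtech_udp" depending on the gateway's packet forwarder.
  enabled = "concentratord"
```

Since MQTT runs over TCP, the broker (or a broker cluster) handles the connection state, which sidesteps the UDP affinity problem entirely.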
Hi Orne,
If we were to use the MQTT Forwarder on the gateway itself, could you suggest any method you might know of to install it on more than 1000 gateways without having to plug into and install on each one manually?
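For context, the naive approach we'd like to avoid is scripting it per gateway over SSH. A dry-run sketch of that approach (package name, user, and `opkg` as the package manager are all assumptions that depend on the gateway OS) would be something like:

```shell
#!/bin/sh
# Hypothetical sketch: print the install commands for every gateway listed
# in gateways.txt (one hostname/IP per line). Review the output, then pipe
# it to sh or GNU parallel. PKG must match your gateway's architecture.
PKG=chirpstack-mqtt-forwarder.ipk
while read -r gw; do
  printf 'scp %s root@%s:/tmp/ && ssh root@%s "opkg install /tmp/%s"\n' \
    "$PKG" "$gw" "$gw" "$PKG"
done < gateways.txt
```

At fleet scale, something with central management (the gateway vendor's own management platform, or a config-management/OTA tool) would presumably be preferable to raw SSH loops.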