We currently have limits of 400m CPU and 512Mi memory on our K8s deployment. This is far more than enough for the 2 gateways and 40-50 devices we have available to test with; however, this is roughly 1/1000th of the device count we will have to handle in production.
Can anyone please advise what kind of CPU/Memory you would recommend for ~100 gateways and ~150k devices flowing through a single GWB instance?
If exact numbers can’t be given, I would at least like to know whether performance scales reliably linearly, or whether there is a point where it begins to drop no matter how many resources the pod has.
GWB handles lightweight LoRa packet forwarding, so its resource usage is typically low. However, providing an exact recommendation is difficult because multiple factors must be considered.
Based on your request, and assuming each device transmits once every 15 minutes (150k devices at that rate works out to roughly 167 messages per second), it’s reasonable to estimate that a server with 2 vCPUs, 4 GB RAM, and NVMe storage for the OS should be able to handle the load for around 100 gateways and 150k devices.
You might also consider placing a Python-based queue between the MQTT broker and your processing backend. This can help rate-limit incoming traffic, avoid overload, and keep message handling more predictable during spikes.
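For illustration, here is a minimal sketch of such a queue, assuming the paho-mqtt 1.x client library and the default `gateway/+/event/up` uplink topic; the broker address, the rate limit, and `forward_to_backend` are placeholders you would replace with your own setup.

```python
import queue
import threading
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt package (1.x API)

# Bounded queue so a burst of uplinks cannot exhaust memory.
uplinks = queue.Queue(maxsize=10_000)

def on_message(client, userdata, msg):
    try:
        # Never block the MQTT loop; drop (and ideally count) overflow instead.
        uplinks.put_nowait((msg.topic, msg.payload))
    except queue.Full:
        pass  # consider incrementing a "dropped messages" metric here

def forward_to_backend(topic, payload):
    # Placeholder: replace with your HTTP POST, gRPC call, etc.
    print(f"forwarding {len(payload)} bytes from {topic}")

def worker(max_per_second=500):
    # Drain the queue at a fixed rate so the backend sees a smoothed load.
    interval = 1.0 / max_per_second
    while True:
        topic, payload = uplinks.get()
        forward_to_backend(topic, payload)
        time.sleep(interval)

client = mqtt.Client()                  # paho-mqtt 2.x also needs a callback API version argument
client.on_message = on_message
client.connect("localhost", 1883)       # assumed broker address
client.subscribe("gateway/+/event/up")  # example uplink topic pattern
threading.Thread(target=worker, daemon=True).start()
client.loop_forever()
```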
I think the best approach is to monitor your load (for example with htop) as you increase the number of sensors and gateways. When you get close to overload, add more resources to your server.
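If you would rather capture this over time than watch htop interactively, a small sketch along these lines (assuming the psutil package; the thresholds are arbitrary) could log utilization as you ramp up gateways and devices:

```python
import time

import psutil  # assumes the psutil package is installed

# Illustrative thresholds; tune to your own headroom policy.
CPU_ALERT_PCT = 80.0
MEM_ALERT_PCT = 80.0

def sample(interval_s=60):
    """Log CPU/memory once per interval and flag readings near overload."""
    while True:
        cpu = psutil.cpu_percent(interval=1)   # averaged over 1 second
        mem = psutil.virtual_memory().percent
        flag = " <-- consider adding resources" if cpu > CPU_ALERT_PCT or mem > MEM_ALERT_PCT else ""
        print(f"cpu={cpu:.1f}% mem={mem:.1f}%{flag}")
        time.sleep(interval_s)

if __name__ == "__main__":
    sample()
```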