Recommended settings

Hello community,

I have ChirpStack installed on Ubuntu on AWS.
I’m currently performing regular Redis and PostgreSQL backups, and I’m using HTTPS integration to forward my data to an external application.
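For reference, the backup routine is roughly along these lines (a simplified sketch; the `chirpstack` database name and the Redis dump path are assumptions based on a default install):

```bash
#!/bin/bash
# Nightly ChirpStack backup sketch (run from cron); adjust names/paths to your setup.
BACKUP_DIR=/var/backups/chirpstack
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"

# Dump the PostgreSQL database (tenants, applications, device metadata).
sudo -u postgres pg_dump chirpstack | gzip > "$BACKUP_DIR/chirpstack-$DATE.sql.gz"

# Force Redis to write a fresh snapshot, then copy the RDB file.
# SAVE blocks Redis briefly; that is fine for a dataset this small.
redis-cli SAVE
cp /var/lib/redis/dump.rdb "$BACKUP_DIR/redis-$DATE.rdb"
```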
I’m wondering what other settings or configurations I should consider adjusting.

My goal is to scale up to 1,000 devices and 100 gateways.
Are there any best practices I should follow as the number of devices and gateways increases?

Thanks in advance!

If you haven’t already, for production deployments it’s always necessary to have TLS on all WAN-facing interfaces of your ChirpStack server. For me that is MQTT (8883), the web interface (443), and gRPC (also 443). If your form of TLS does not include authentication, you likely also want to add that, for example a username/password on the MQTT broker.
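As a minimal sketch of what that can look like in mosquitto.conf (assuming Mosquitto is your broker; the certificate paths are just placeholders for wherever you keep your certs):

```
# TLS listener for gateways and external MQTT clients
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile /etc/mosquitto/certs/server.key

# Require username/password instead of anonymous access
allow_anonymous false
password_file /etc/mosquitto/passwd
```

The password file itself can be created with `mosquitto_passwd -c /etc/mosquitto/passwd youruser`.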

Depending on how serious you are, you should probably also consider some form of redundancy, so that if your server fails another can take its place seamlessly. There are many ways to do this, and it depends on your use case and what works best for you, but I was researching this topic a few months ago, and if you’re curious you can check out my plan:

@Liam_Philipp Thanks for this! I have a few more questions, if I may.

My devices (planning to have 1,000 or more) send a downlink every 20 minutes.
My AWS server has 4 GB of RAM (with a quick option to upgrade) and a very stable connection.
There are no other applications running on my Ubuntu instance.

Given this setup, do you think creating a cluster is necessary?
I know setting up a cluster could be a lifesaver in case of issues :slight_smile:
But I’m wondering—how often do those kinds of issues actually happen?
In your experience, is there a risk that the system could crash?

Not to be the “um actually” guy, but I think you mean uplink here. Uplink is device → server. I only mention it because if you do mean downlink, as in all of your uplinks are confirmed and also need downlink responses from the server, you may run into issues beyond server specs, since gateways only have a single channel assigned for downlinks. You can bottleneck the downlink channel much faster than the eight channels used for uplinks.
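Rough numbers to put it in perspective (assuming one message per device per 20 minutes): 1,000 devices works out to about 50 messages per minute, or under one per second across the whole network, which is nothing for the server. But if every one of those were confirmed, each would also need a downlink in response, and those all go out over a gateway’s single transmit path (which cannot receive while it transmits) and count against regulatory duty-cycle limits, so that is where you would feel pressure first, not in CPU or RAM.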

Our LoRaWAN network is still in dev/testing, and we haven’t yet had that many devices to really test its capabilities. But from what I’ve read on the forum this should be okay. I wouldn’t be surprised if you need to increase the RAM a bit (to 6 GB), but 4 GB is probably in the ballpark.

For your case I’d say clustering is not necessary; however, redundancy/high availability should still be considered. It really depends on your application and clients. If the network is for your own monitoring purposes and nobody cares if it goes down for a day while you rebuild it, so be it. However, if a failure would take vital services offline, or you would receive hundreds of phone calls from angry customers, it can save you a lot of time and effort to simply spend a few days implementing even the simplest high-availability measure.

In my year-plus of using ChirpStack I have never seen it fail once in regular operation. It really is reliable software. However, I have seen servers fail, networks fail, and even updates fail. So take that as you will and weigh your own risks.

Of course, I was thinking about uplinks! :slight_smile:
One more question — should I worry about the Redis database and clean it up once in a while, or is it unnecessary and not something I should bother with at all? What’s your opinion?

You don’t need to worry about Redis/PostgreSQL cleanup; the only cleanup I would consider is the logs. My Docker logs do get large after a long runtime, and I’m not sure they are self-rotating by default, but that’s an easy fix in your docker-compose.yml.
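For example, something along these lines (a sketch; the service name and limits are placeholders) caps each container’s logs to a few rotated files:

```yaml
services:
  chirpstack:
    # ...image, ports, volumes as in your existing compose file...
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # rotate once a log file reaches 10 MB
        max-file: "3"     # keep at most 3 rotated files per container
```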

In a few years I’ve seen pretty much the same. I’ve never had an issue with ChirpStack itself, but Mosquitto has failed, the network has failed, server hardware has failed, and updates can cause unexpected issues.

I’d say that roughly once a year some kind of major issue happens, but that is highly dependent on the specific setup.
