ChirpStack Redis memory usage

Hi Everyone,

We are using ChirpStack v3 and recently onboarded 150k devices into a system that previously had just 5k. All our components are deployed in Kubernetes, and the Redis pod uses a persistent volume. We did increase the memory for the ChirpStack components, but we are noticing that ChirpStack's Redis (running as a single node) keeps creeping up in pod memory as more devices get activated, likely because ChirpStack v3 stores device activation parameters (device sessions) in Redis.

Are there any recommendations on how to scale and handle this?
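
Before scaling, it may help to confirm what is actually eating the memory. Here is a minimal sketch with plain redis-cli; the `lora:ns:device:*` pattern is an assumption about v3's key naming, so scan your own keyspace first to see the real prefixes:

```sh
# High-level memory figures
redis-cli INFO memory | grep -E 'used_memory_human|used_memory_peak_human'

# Sample the largest keys per data type
redis-cli --bigkeys

# Rough total for one key prefix. The pattern below is a guess at the
# v3 device-session prefix -- run `redis-cli --scan | head` first to
# see what prefixes your deployment actually uses. This loops over
# every matching key, so it will be slow on a 150k-device keyspace.
total=0
for key in $(redis-cli --scan --pattern 'lora:ns:device:*'); do
  bytes=$(redis-cli MEMORY USAGE "$key")
  total=$((total + ${bytes:-0}))
done
echo "approx bytes under lora:ns:device:* : $total"
```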

Have you considered Redis Cluster? Sharding across multiple instances could help.
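
If you go that route, Redis handles the sharding itself once the cluster is created; something like the below, where the hostnames are placeholders for your own StatefulSet pods/services. One caveat: the ChirpStack side also has to speak the cluster protocol, and if I remember right, Redis Cluster support only landed in the later v3 network-server releases, so check your version's configuration reference first.

```sh
# Create a 6-node cluster: 3 masters with 1 replica each.
# Hostnames are placeholders for your k8s pods/services.
redis-cli --cluster create \
  redis-0.redis:6379 redis-1.redis:6379 redis-2.redis:6379 \
  redis-3.redis:6379 redis-4.redis:6379 redis-5.redis:6379 \
  --cluster-replicas 1
```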

Also, probably not a feasible solution for you, but migrating to v4 would obviously be the biggest improvement.


Thanks for the answer. Another area we are looking at is the aggregated metrics data, where we could clear up some space.
We want our devices to stay active even if they do not send any data for 10 years, so we have updated the device-session TTL to 10 years in the network server. Can we reduce the aggregation metrics retention in the application server to 1 hour? Is it completely unrelated to the device session?
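For reference, this is the setting we changed (key name as in the v3 network-server example config; the value is ours):

```toml
# chirpstack-network-server.toml
[network_server]
  # Keep device sessions for ~10 years instead of the 744h default
  device_session_ttl="87600h0m0s"
```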

I have only ever used v4 so I cannot be 100% sure, but yes, in v4 you can trim those aggregation metrics down to as few data points as you like; they are purely for the graphs in the UI, so they are unrelated to the device session.
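
I can't vouch for the exact v3 key names, but going by the v3 application-server example config the retention knobs look roughly like this; tightening the TTLs (or dropping intervals entirely) should shrink those metrics keys. The values here are illustrative, so verify the key names against your own chirpstack-application-server.toml:

```toml
# chirpstack-application-server.toml -- key names per the v3 example
# config (verify against your file); values are illustrative
[metrics]
timezone="Local"

  [metrics.redis]
  # Keep only hourly/daily graph data and drop minute-level data
  aggregation_intervals=["HOUR", "DAY"]
  hour_aggregation_ttl="24h0m0s"
  day_aggregation_ttl="744h0m0s"
```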