Migrating to managed Redis causing slow data rate

We are trying to migrate away from the Docker Compose Redis instance. We’re still on Loraserver (the latest version before the name change to Chirpstack). We migrated the keys with this procedure: How To Migrate Redis Data to a DigitalOcean Managed Database | DigitalOcean. However, it seems we didn’t update the configuration correctly for the application server, so only the network server was pointing to the new instance. We didn’t discover this error until someone pointed out that our devices, which are supposed to report data every minute, were only reporting about 4 times an hour.
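For reference, both components read their Redis connection from their own config file, so both have to be updated; a sketch assuming the standard loraserver.toml / lora-app-server.toml layout (the hostname and credentials are placeholders, and a managed instance may require a `rediss://` TLS URL):

```toml
# loraserver.toml (network server)
[redis]
url="redis://user:password@your-managed-redis.example.com:25061"

# lora-app-server.toml (application server) — needs the same update,
# or it keeps talking to the old Docker Compose instance
[redis]
url="redis://user:password@your-managed-redis.example.com:25061"
```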

After fixing the configuration for the application server too, the data rate still hasn’t increased. So now we’re wondering whether that wasn’t the problem in the first place, or whether the period during which the application server and the network server were pointing at different Redis instances has messed something up. I couldn’t find anything in the documentation about how Redis specifically is used by the application and network servers. Maybe I’ll have to dig through the source code?

Could something else be messing this up? Could it be the extra latency introduced by the Redis instance being deployed in Amsterdam while the Loraserver stack is deployed in Norway?

When I check the frames received and frames transmitted for the gateways, they are drastically reduced. The screenshots below show frames transmitted before migrating and after.

Before :point_up_2:, and after migrating :point_down:

When adding a new device that reports every minute, it does so flawlessly. Any suggestions on what we can do to get the data rate back up without physically resetting all of our devices?

We tried resetting the device and the gateway, but that didn’t help either. We also discovered that among newly connected devices that are supposed to report every minute, some only report every 15 minutes, while others report every minute as they should.
Any ideas @brocaar or @bconway?
Can the extra latency from the Redis instance being deployed in Amsterdam, while the Loraserver stack is deployed in Norway, affect this somehow?

You haven’t given many details to work with. What is the latency/round-trip time to your managed Redis compared to your previous local instance? Some benchmarking may be in order, as a managed Redis is likely fairly opaque about the resources/performance allocated to it.
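A quick way to get that number is `redis-cli --latency -h <host> -p <port>`. If redis-cli isn’t handy, here is a minimal sketch of the same idea in Python: it sends inline `PING` commands over a raw TCP socket and measures round-trip times. The hostname in the usage comment is a placeholder, and note that a managed instance may only accept TLS connections, in which case the socket would need to be wrapped with `ssl` first.

```python
import socket
import statistics
import time

def measure_rtt(host: str, port: int, samples: int = 100) -> list[float]:
    """Send inline PING commands to a Redis server and return
    per-command round-trip times in milliseconds."""
    rtts = []
    with socket.create_connection((host, port), timeout=5) as sock:
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendall(b"PING\r\n")   # Redis inline command protocol
            reply = sock.recv(64)       # expect b"+PONG\r\n"
            if not reply.startswith(b"+PONG"):
                raise RuntimeError(f"unexpected reply: {reply!r}")
            rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

def summarize(rtts: list[float]) -> str:
    return (f"min={min(rtts):.2f}ms avg={statistics.mean(rtts):.2f}ms "
            f"max={max(rtts):.2f}ms")

# usage (placeholder host/port):
# print(summarize(measure_rtt("your-managed-redis.example.com", 25061)))
```

Running this from the Norway host against both the local Docker Compose Redis and the Amsterdam instance would make the latency difference concrete.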

I was just wondering whether the added latency could be the cause at all. We’ve reverted to the Docker Compose Redis, and now the data rate is back up again.

Certainly a possibility. It could also be rate limiting on the part of your managed Redis provider.

It could. If the latency is too high, downlinks will fail because they reach the gateways too late. If the devices are using ADR, they regularly require a downlink; otherwise the ADR backoff kicks in and the device starts lowering its data-rate.
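To make that concrete, here is an illustrative Class A timing budget. All figures are assumptions for the sketch, not measurements of this deployment; the one fixed number is LoRaWAN’s default RX1 delay of 1 second, within which the downlink must be back at the gateway. A handful of cross-region Redis round trips can eat a meaningful share of that window:

```python
# Illustrative Class A downlink timing budget (all latencies are assumed
# example values, not measurements).
RX1_DELAY_MS = 1000  # default LoRaWAN RX1 receive-window delay

budget = {
    "gateway -> network server (uplink)": 50,
    "network server Redis round trips": 4 * 30,  # e.g. 4 ops at ~30 ms NO<->AMS RTT
    "app server Redis round trips": 2 * 30,
    "scheduling margin + network server -> gateway": 200,
}
used = sum(budget.values())
print(f"used {used} ms of the {RX1_DELAY_MS} ms window; "
      f"slack = {RX1_DELAY_MS - used} ms")
```

With a local Redis those round trips are sub-millisecond, which is why reverting restored the data rate.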