I’m just curious if anyone has an example of a custom ADR algorithm they have implemented. I’ve searched the forum, and most of the discussion about ADR is from v3.0 or earlier. I found a post from @Alex9779 from 2 years ago where he created a few and posted them to his GitHub: https://github.com/Alex9779/Chirpstack-ADR/tree/master/v4. I’m just hoping someone may have a more current one.
I’m trying to work out the logic for a custom ADR algorithm for US915 that prioritizes keeping the device in DR1 to DR3, and only uses DR0 if absolutely necessary. Essentially, if at least 10% of my packets get through at DR1, then transmitting at DR1 is fine; only if reception drops below that should the device switch to DR0. This is mostly because of the drastic decrease in the amount of data that can be sent per uplink at DR0.
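To make that concrete, here is a minimal sketch of the rule I have in mind (plain JS for illustration; the 10% threshold and the `received`/`expected` names are my own placeholders, not anything from the ChirpStack API):

```js
// Stay at DR1 while the delivery ratio over a window of uplinks is >= 10%;
// only fall back to DR0 below that threshold.
const MIN_DELIVERY_RATIO = 0.10;

function chooseFloorDr(received, expected) {
  const deliveryRatio = expected > 0 ? received / expected : 1;
  // DR1 is acceptable even at 10% delivery because DR0 cuts the
  // maximum payload so drastically; only drop to DR0 below the threshold.
  return deliveryRatio >= MIN_DELIVERY_RATIO ? 1 : 0;
}
```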
Any help in figuring this out is greatly appreciated.
- **Uplink Monitoring:** The network server monitors a device’s uplink transmissions, especially link-quality indicators like RSSI (Received Signal Strength Indicator) and SNR (Signal-to-Noise Ratio).
- **Analysis:** If the link quality is good (e.g., strong signal, low noise), the network can instruct the device to increase its data rate or lower its transmit power. This results in shorter airtime and less energy use.
- **Commands via Downlink:** The network sends MAC commands to the device with the new recommended settings.
- **Fallback & Relearning:** If communication starts failing (due to movement, interference, etc.), the device can automatically lower its data rate and increase power again, seeking a better link. Some devices do this autonomously if they stop receiving network commands.
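For ChirpStack v4 specifically, a custom ADR algorithm is a JavaScript plugin that exports `id()`, `name()`, and `handle(req)`, where `handle()` returns the data rate, TX power index, and nbTrans the server should command. A minimal pass-through skeleton looks roughly like this (double-check the exact request fields against the current docs or the repo linked above; I’m going from memory):

```js
// Minimal ChirpStack v4 ADR plugin skeleton (pass-through).
// handle() receives the device's current state plus its uplink history
// and returns the settings the network server should command.

export function id() {
  return "example_passthrough";
}

export function name() {
  return "Example pass-through ADR plugin";
}

export function handle(req) {
  // Returning the current values leaves the device's settings unchanged.
  return {
    dr: req.dr,
    txPowerIndex: req.txPowerIndex,
    nbTrans: req.nbTrans,
  };
}
```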
I have used an ADR strategy where the sensor device decides when to increase its transmit power and lower the data rate to DR0 when it needs to send a message to the base station with a high probability of delivery. After that it falls back to the standard ADR algorithm. This has worked quite well, and with a customized ADR algorithm you could probably make it even better.
I understand the basics of how ADR works; I was just hoping for a few examples to get me moving in the correct direction.
In the standard ADR algorithm there is a ‘requiredHistoryCount’ that requires a device to have 20 packets in its uplink history before it will adjust the data rate.
My current thought is to use req.uplinkHistory and compare the fCnt values of the stored packets to determine how many were missed based on the fCnt of the last 20 received. Then I just have to apply that calculation at the getIdealTxPowerIndexAndDR step when a device is at DR1 (rough sketch below).
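Here is a rough cut of that calculation, assuming the v4 plugin request shape (uplinkHistory entries carrying an fCnt, ordered oldest to newest) and ignoring fCnt rollover; the plugin boilerplate (id/name) is omitted, and the getIdealTxPowerIndexAndDR part is just a placeholder comment since I haven’t ported it yet:

```js
const REQUIRED_HISTORY_COUNT = 20; // same gate as the standard algorithm
const MIN_DELIVERY_RATIO = 0.10;   // my 10% threshold from above

// Estimate the delivery ratio from the fCnt gaps in the stored history.
// Assumes entries are ordered oldest to newest and fCnt has not rolled over.
function deliveryRatio(uplinkHistory) {
  const n = uplinkHistory.length;
  if (n < 2) {
    return 1;
  }
  const expected = uplinkHistory[n - 1].fCnt - uplinkHistory[0].fCnt + 1;
  return expected > 0 ? n / expected : 1;
}

export function handle(req) {
  const resp = {
    dr: req.dr,
    txPowerIndex: req.txPowerIndex,
    nbTrans: req.nbTrans,
  };

  // Mirror the standard algorithm: no changes until enough history exists.
  if (!req.adr || req.uplinkHistory.length < REQUIRED_HISTORY_COUNT) {
    return resp;
  }

  // ... the usual getIdealTxPowerIndexAndDR logic would run here ...

  // Floor the result at DR1 unless delivery has fallen below 10%.
  const floorDr =
    deliveryRatio(req.uplinkHistory) >= MIN_DELIVERY_RATIO ? 1 : 0;
  if (resp.dr < floorDr) {
    resp.dr = floorDr;
  }
  return resp;
}
```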
I have a bit of test code running now, but I’m not sure whether it is working correctly or whether my end device just isn’t hearing the UnconfirmedDataDown responses to the ADR request.