ADR algorithm and configuration

We are using the latest release and have started to focus on in-the-field testing of devices. We seem to have run into an issue with the ADR design, and I feel like I’m missing a configuration.

Basically, we are using US902 and our nodes are capped at 20 dBm. The only configurations that I’m aware of for ADR are the installation_margin (defaulted to 10 and unchanged in our system) and the min DR/max DR values.

The problem is that the loop seems to adjust TX power levels first and foremost, and then, I guess, SF. I’ve watched the server request our devices to go to 22, 24, 26, 28 dBm even though they don’t have this capability. As I’d expect, when this happens you can watch the SNR just continue to go down because the nodes can’t adjust, until eventually you start dropping packets. For some reason, I’ve never seen our system try to adjust the SF to anything other than 7.

How does the ADR loop work? How do I tell the system that our nodes can’t go above 20 dBm? What is the installation margin, and when would you expect the server to signal an SF change instead of just a TX power change?

Thanks,
Patrick

Hi Patrick, to describe the ADR engine in short:

It calculates the link budget when receiving an uplink, based on the SNR of that uplink and the min. SNR required for the used spreading-factor. It then subtracts the installation-margin from that link budget; the resulting value divided by 3 defines the number of steps.

When the number of steps is positive, it can increase the data-rate (decrease the spreading-factor) by that number of steps, until it reaches the max. DR.

When there are steps left after that, it will use them to decrease the TXPower. E.g. for EU this is:

[image: EU TXPower table]

Currently, when the number of steps is negative, it will increase the TXPower if possible. It will never decrease the data-rate! See also: https://github.com/brocaar/loraserver/blob/master/internal/adr/adr.go#L174.
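To illustrate, here is a minimal sketch of that step calculation (simplified; the function names and SNR values below are illustrative, the actual logic is in the adr.go file linked above):

    package adr

    // requiredSNR maps a spreading-factor to the min. SNR (dB) needed to demodulate it.
    var requiredSNR = map[int]float64{7: -7.5, 8: -10, 9: -12.5, 10: -15, 11: -17.5, 12: -20}

    // nStep returns the number of ADR steps for an uplink received at the given SNR
    // and spreading-factor, given the configured installation margin.
    func nStep(snr float64, sf int, installationMargin float64) int {
        linkBudget := snr - requiredSNR[sf]
        return int((linkBudget - installationMargin) / 3)
    }

    // applySteps sketches how positive steps are spent: first increase the data-rate
    // up to maxDR, then use whatever is left to lower the TXPower. A negative number
    // of steps only increases the TXPower; the data-rate is never decreased.
    func applySteps(steps, dr, maxDR int) (newDR, txPowerSteps int) {
        if steps < 0 {
            return dr, steps
        }
        for steps > 0 && dr < maxDR {
            dr++
            steps--
        }
        return dr, steps
    }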

Please note that a device should ACK when the requested TXPower is not implemented (LoRaWAN 1.0.2 spec).

Interesting. Just out of curiosity, why will it never decrease the data-rate? And with that in mind, what determines the minimum data rate and how/when is that set?

The min. DR is currently not used by the implemented algorithm but is specified as a device-profile field by the LoRaWAN Backend Interface specification.

LoRa Server never decreases the DR, as this could result in a domino effect. E.g. when your network is dense, lowering the data-rate of one device will impact other devices, which might then also need to change to a lower data-rate (following the ADR algorithm), and so on…

Note that devices implement an automatic data-rate decay, so in case of disconnection from the network they will lower their data-rate until they are connected again.

Yes, that’s what I suspected and that makes sense. Does the device decay algorithm ever go below the DR set in the device-profile? In other words, is the device-profile DR the “floor” or can a device decay beyond that rate if it disconnects?

Yes, the device could go below that value. I think it goes to the lowest data-rate possible and, failing that, goes back into join-state. This is documented in more detail in the LoRaWAN specification.

I’m looking to see if changing the installation_margin value could help some devices that have poor RSSI and SNR values. I think if I increase the installation_margin, then the DR increase will be less aggressive. Can you confirm that I am thinking of this correctly?

Based on your previous post, the formula is:

(link_budget - installation_margin) / 3 = the number of data-rate steps that LoRa Server moves.

Am I understanding that correctly?

Yes, that is correct.
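For example, with illustrative numbers: an uplink received at SNR = 5 dB on SF10 (which needs roughly -15 dB to be demodulated) gives a link budget of about 20 dB. With the default installation_margin of 10, that is (20 - 10) / 3 = 3 steps, so the data-rate can be raised by up to three steps (any steps left over go into lowering the TXPower). Raising installation_margin to 15 would leave (20 - 15) / 3 = 1 step, so yes, the DR increase would be less aggressive.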


Hi @brocaar, can you provide me with information on “nb_trans” management in the ADR algorithm? Is it possible to prevent the server from setting nb_trans > 1?
Thank you for your help.
Sebastien.

Hi, does someone know something about this “nb_trans” in ADR management? I would like to control it and be sure that the algorithm never sets a value higher than 1…

Hi @brocaar,
I can’t find this in the LoRaWAN specifications. I read everything in the specifications related to ADR, but could not find it or did not understand it.

Can you guide me to this part or post a screenshot of it?

thanks a lot,
sil

Actually, I might have been wrong about the re-join. However, a device will start lowering its data-rate if it detects that it is not receiving any response when the ADRACKReq bit is set.

thanks,

So if I have “adrAckReq: false” in my uplink frames, the device has no way to know whether it should start lowering its DR?

Do you know if other LoRa servers (The Things Network or others) have implemented an ADR decrease, or is it “forbidden” by the specifications?

regards, sil

The lowering of the data-rate usually happens in the device firmware stack, so it is independent of the Network Server that you are using.

So if I have “adrAckReq: false” in my uplink frames, the device has no way to know whether it should start lowering its DR?

In short, when this bit is set, a Network Server must send a downlink. This gives the device confirmation that it is still connected (e.g. the NS sent back a downlink reply). Please refer to the LoRaWAN Specification for full details on the adrAckReq bit.
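For reference, a minimal sketch of that device-side back-off (the real behaviour lives in the device firmware stack, e.g. LoRaMac-node; the type, field and constant values below are illustrative, using the LoRaWAN 1.0.2 defaults):

    package lorawan

    const (
        adrAckLimit = 64 // uplinks without any downlink before ADRACKReq is set
        adrAckDelay = 32 // further uplinks to wait before each step-down
    )

    type device struct {
        adrAckCnt int  // incremented on each uplink, reset to 0 on any downlink
        dataRate  int  // current uplink data-rate
        txPower   int  // current TXPower index (0 = max power)
        adrAckReq bool // ADRACKReq bit to set in the next uplink
    }

    // beforeUplink updates the ADR back-off state before each uplink is sent.
    func (d *device) beforeUplink() {
        d.adrAckCnt++

        // After adrAckLimit uplinks without a downlink, set ADRACKReq so the
        // network is forced to answer with a downlink.
        d.adrAckReq = d.adrAckCnt >= adrAckLimit

        // If there is still no downlink after adrAckDelay more uplinks, step down:
        // first go back to max TX power, then lower the data-rate one step at a
        // time until connectivity is regained.
        if d.adrAckCnt >= adrAckLimit+adrAckDelay {
            if d.txPower > 0 {
                d.txPower = 0
            } else if d.dataRate > 0 {
                d.dataRate--
            }
            d.adrAckCnt = adrAckLimit // wait another adrAckDelay uplinks before the next step
        }
    }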


My problem with ADR:
In loraserver, in the “Service profile”, I set Minimum allowed data-rate = 0 and Maximum = 4 (for the US915 region).
I tested 3 different end-node devices: a RAC612 LoRaButton, a Dragino I/O Controller LT-33222-L and an STM B-L072Z-LRWAN1 dev board.
All devices used DR = 0 in their uplinks:

"txInfo": {
                "dr": 0,
                "frequency": 905300000
            }

Also, in each downlink the server sends a LinkADRReq:

...
     {
                                "cid": "LinkADRReq",
                                "payload": {
                                    "dataRate": 4,
                                    "txPower": 2,
                                    "chMask": [
                                        false,
                                        false,
                                        false,
                                        false,
                                        false,
                                        false,
                                        false,
                                        false,
                                        true,
                                        true,
                                        true,
                                        true,
                                        true,
                                        true,
                                        true,
                                        true
                                    ],
                                    "redundancy": {
                                        "chMaskCntl": 0,
                                        "nbRep": 1
                                    }
                                }
                            }

But all devices answer back with false in “dataRateAck”:

"fOpts": [
                        {
                            "cid": "LinkADRReq",
                            "payload": {
                                "channelMaskAck": true,
                                "dataRateAck": false,
                                "powerAck": true
                            }
                        },
                        {
                            "cid": "LinkADRReq",
                            "payload": {
                                "channelMaskAck": true,
                                "dataRateAck": false,
                                "powerAck": true
                            }
                        }
                    ] 

Uplink FCtrl ADRACKReq bit is false:

"fCtrl": {
                        "adr": true,
                        "adrAckReq": false,
                        "ack": false,
                        "fPending": false,
                        "classB": false
                    },

So, my question is: what is wrong with ADR?
Is it my settings in loraserver, or do all 3 devices just not want to use ADR?
BTW, all 3 devices are set up to use ADR…

Just did some experiments: I set each device to not use ADR, and the corresponding setting on loraserver:

disable_adr=true

By default each device used DR=3. It works well: I don’t see any MAC messages and I can send more bytes of application payload per uplink.
As I understand it, I have disabled the most useful feature in loraserver, but I really cannot make it work.

Has anybody successfully used the ADR feature?
Please share your experience and settings on both sides: loraserver and motes!

Thanks.

@SDA,

Did you get more info around limiting nb_trans to 1? I guess not, otherwise it would have been posted here!

@brocaar,

As @SDA said, I didn’t find information about low-level retransmission management in the ADR algorithm.
Do you have some info about your network server implementation?

@e.salles, no information received…
@brocaar, can you give us an update on that? Thank you.

I had a look at this document:


I think the ADR algo is described here.
Questions:
  1. Is ChirpStack using the latest revision of this?
  2. The document says the installation margin should be implemented on a per-device basis; perhaps this could be a nice feature to request.
  3. Is repetition rate control implemented in ChirpStack ADR?

Cheers,
P

Hi everybody. @brocaar, do you have documentation about the ADR algorithm that ChirpStack implements? Is it the algorithm that @subgig indicated in the comment before? Also, is there API documentation?