On initialisation, devices in AU must assume dwell time limiting is enabled (UplinkDwellTime = 1, 400 ms dwell), so the initial data rate must be DR2. (Reference “LoRaWAN 1.0.3 Regional Parameters”, section 2.6.2; “LoRaWAN 1.1 Regional Parameters”, section 2.6.2)
Note that this is a key behaviour difference between AU915 and US902: US902 starts at DR0. It is also a relatively new problem, as the requirement to use DR2 as the starting DR was introduced in Regional Parameters 1.0.2b (it was DR0 in earlier specifications), so older end-node devices are not affected.
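To make the difference concrete, here is an illustrative sketch comparing the two regions' starting data rates. The SF/bandwidth values follow the Regional Parameters documents; the table and helper below are hand-written for illustration, not extracted from any library.

```python
# Default (initial) data rates per region, as (DR index, SF, bandwidth kHz).
# Illustrative only; values per LoRaWAN Regional Parameters 1.0.2b+.
DEFAULT_INITIAL_DR = {
    "US902": (0, 10, 125),  # US902 devices start at DR0 (SF10 / 125 kHz)
    "AU915": (2, 10, 125),  # AU915 (RP 1.0.2b+) devices start at DR2 (SF10 / 125 kHz)
}

def same_physical_rate(a, b):
    """Two data rates are physically identical if SF and bandwidth match."""
    return a[1:] == b[1:]

# The AU915 starting DR is physically the same modulation as the US902 one;
# only the index in the regional DR table differs.
assert same_physical_rate(DEFAULT_INITIAL_DR["US902"], DEFAULT_INITIAL_DR["AU915"])
```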
When ADR is enabled, this causes the LinkADRReq to be rejected, as it contains “dataRate: 0”, which is below the minimum permitted value of DR2. The LinkADRAns comes back with the Data Rate ACK bit cleared.
According to the LoRaWAN specification:
“If any of those three bits equals 0, the command did not succeed and the node has kept the previous state.”
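A minimal sketch of interpreting the LinkADRAns Status byte, whose low three bits are channel mask ACK (bit 0), data rate ACK (bit 1), and power ACK (bit 2) per the LoRaWAN 1.0.x specification. The helper name is my own, not a LoRaServer function.

```python
# LinkADRAns Status byte flags (LoRaWAN 1.0.x, section 5.2).
CH_MASK_ACK = 0x01
DATA_RATE_ACK = 0x02
POWER_ACK = 0x04

def link_adr_accepted(status: int) -> bool:
    """The command only succeeds if all three ACK bits are set."""
    return status & (CH_MASK_ACK | DATA_RATE_ACK | POWER_ACK) == 0x07

# An AU915 device rejecting DR0 clears only the data rate ACK bit:
status = CH_MASK_ACK | POWER_ACK  # 0b101
assert not link_adr_accepted(status)
assert not (status & DATA_RATE_ACK)
```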
The LoRaServer continues to send LinkADRReq with exactly the same parameters, and the device keeps rejecting it because the DR is too low.
The result is that ADR never progresses.
The LoRaServer should try a different data rate in the LinkADRReq until it finds one the device can accept. Specifically, if it receives a LinkADRAns with the Data Rate ACK bit cleared, it should increment the data rate for the next LinkADRReq.
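The suggested retry behaviour could be sketched as follows. This is a hypothetical helper illustrating the idea, not existing LoRaServer code.

```python
MAX_DR = 6  # AU915 uplink data rates run DR0..DR6

def next_link_adr_dr(requested_dr: int, data_rate_acked: bool) -> int:
    """Pick the DR for the next LinkADRReq: keep it if acked,
    otherwise try the next one up instead of resending the same request."""
    if data_rate_acked:
        return requested_dr
    return min(requested_dr + 1, MAX_DR)

# DR0 rejected -> try DR1, then DR2, which an AU915 1.0.2b device accepts.
assert next_link_adr_dr(0, False) == 1
assert next_link_adr_dr(1, False) == 2
```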
I suspect the origin of this problem is that US902 DR0 (SF10, 125 kHz) is numbered DR2 in the AU915 scheme.
If AU nodes need to limit themselves to a 400 ms dwell, allowing SF11 or SF12 is probably a design mistake: the whole reason US915 stays at SF10 and below is that above that point most of the packet time allowance is spent on LoRaWAN headers, leaving little room for payload.
It may be that a TxParamSetupReq should be sent first if the dwell window is actually longer. That raises the question of how node “state” is known in the ABP case: a node could well experience a reset and, at least if it has non-volatile storage of FCount, continue unaware of its previous settings, with the server unable to detect this.
My understanding is that the uplink dwell time flag can be cleared at some point in this negotiation.
We still have the problem that ADR requested by a 1.02B or 1.03B device will never succeed, with the potential for the LoRaServer to keep transmitting LinkADRReq messages to such nodes indefinitely, to no one’s benefit.
As the LoRaServer knows (via configuration) the end node’s version and its selected regional parameters, it should arguably modify its ADR behaviour according to those settings: subject to checking, if revision B of the regional parameters is indicated, then the minimum DR for ADR is 2.
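The proposed configuration-driven behaviour might look like the sketch below. `min_adr_dr` is a hypothetical helper of my own; LoRaServer has no function by this name, and the revision strings are assumptions about how the configuration might label them.

```python
def min_adr_dr(region: str, rp_revision: str) -> int:
    """Return the lowest data rate ADR may request, based on the
    configured region and regional-parameters revision (hypothetical)."""
    if region == "AU915" and rp_revision in ("1.0.2B", "1.0.3B"):
        return 2  # dwell-time limited: SF10 / 125 kHz is the floor
    return 0

assert min_adr_dr("AU915", "1.0.2B") == 2
assert min_adr_dr("US902", "1.0.2B") == 0
```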
It seems the workaround is to manually set the “Minimum allowed data-rate” and “Maximum allowed data-rate” in the service profile. I first tried setting only the minimum to 2, and this didn’t work because the default value for the maximum was 0.
Minimum allowed data-rate: 2
Maximum allowed data-rate: 6 (to include 500kHz channel) or 5 (for 125kHz channels only)
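This also explains why setting only the minimum failed: any DR-range clamp is meaningless when the maximum is below the minimum, so both fields must be set together. A small sketch, with `clamp_dr` a hypothetical helper rather than LoRaServer code:

```python
def clamp_dr(dr: int, lo: int, hi: int) -> int:
    """Clamp a requested data rate into the service-profile range."""
    if hi < lo:
        raise ValueError("maximum allowed data-rate must be >= minimum")
    return max(lo, min(hi, dr))

assert clamp_dr(0, 2, 6) == 2  # a DR0 request is lifted to the DR2 floor
# clamp_dr(0, 2, 0) raises: min=2 with the default max=0 is inconsistent
```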