LinkADRReq issue overloads data payload

Hi,

I am trying to configure two different kinds of devices with two different requirements, both in the US915 spectrum.

The first type of device I have is the Sanmina Temperature/Humidity sensor. It is provisioned only on the 2nd set of channels, and doesn’t seem to have the ADR option enabled.

The other device is a Tracknet device. It follows the LoRaWAN protocol more closely and will transmit on all 64 channels. We have devices that are strictly on either the first or second set of channels (like the Sanmina), so we have 8-channel gateways on both the first and second set. I have set the loraserver.toml option enabled_uplink_channels=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] to make the Tracknet devices work only on those channels. Having this option set (as opposed to enabled_uplink_channels=[]) seems to cause problems with the Sanmina, described below.

After the first uplink, the Sanmina device sends up only LinkADRAns MAC commands in its payload, and I then see a "link_adr request not acknowledged" error message in the loraserver logs. Changing the enabled_uplink_channels option fixes this behavior, but results in the Tracknet devices not functioning properly (transmitting on all 64 channels).

  • Why does the loraserver send down two LinkADRReq MAC commands even though the previous uplink had its ADR bit set to false?
  • Is there a way to disable ADR options for a specific application, service-profile, or device-profile?
  • Is there a more appropriate configuration I can use to avoid this problem?

Thanks very much for your time.

In addition to being how the data rate can be commanded (either pre-emptively or in response to a node's request), this is also the mechanism that LoRaWAN (at least 1.0; I didn't check 1.1) offers to set the enabled channels.
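For reference, here is a rough sketch of the fields one LinkADRReq carries (per the LoRaWAN 1.0.x MAC specification); the Go type and Marshal helper are just illustrative names, not LoRa Server's actual implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// LinkADRReq sketches the 4-byte LinkADRReq payload (CID 0x03):
// DataRate_TXPower (1 byte), ChMask (2 bytes, little endian), Redundancy (1 byte).
type LinkADRReq struct {
	DataRate   uint8  // bits 7..4 of the DataRate_TXPower byte
	TXPower    uint8  // bits 3..0 of the DataRate_TXPower byte
	ChMask     uint16 // 16-bit channel mask, interpreted according to ChMaskCntl
	ChMaskCntl uint8  // bits 6..4 of the Redundancy byte
	NbTrans    uint8  // bits 3..0 of the Redundancy byte
}

// Marshal packs the fields in on-air byte order.
func (r LinkADRReq) Marshal() []byte {
	b := make([]byte, 4)
	b[0] = r.DataRate<<4 | r.TXPower&0x0F
	binary.LittleEndian.PutUint16(b[1:3], r.ChMask)
	b[3] = (r.ChMaskCntl&0x07)<<4 | r.NbTrans&0x0F
	return b
}

func main() {
	// ChMaskCntl=0 means the mask applies to channels 0-15 (US915).
	fmt.Printf("% X\n", LinkADRReq{ChMask: 0xFFFF}.Marshal()) // 00 FF FF 00
}
```

Note how the data rate, TX power and channel mask all live in the same command, which is why the two concerns end up entangled.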

The spec would seem to suggest that this should be acknowledged, but is the lack of an ack actually causing an operational problem? Does the node use the new settings?

The Sanmina devices, from what I can gather, do not honor the LinkADRReq channel-mask option. They are programmed to transmit only on the second set of channels and ignore any channel-mask change. They do not seem to ignore the MAC commands completely, however, as their uplinks contain two LinkADRAns responses (with all ACK bits set to false).

Could you please point out which section of the spec I should be looking at, to help me better understand the problem? To my understanding, the ADR options control not just the data rate but the channel mask as well. I also assumed that if a device has the ADR bit in its FHDR set to false, loraserver should not send down an ADR command.

It should not control the device data-rate; however, it will still use the LinkADRReq mac-command to configure the device channel-mask. Please see the LinkADRReq mac-command specification in the LoRaWAN specification document.

Okay, I see; I will look into that. Is there a way to prevent these commands from being sent down, or is this simply part of the LoRaWAN protocol? Would you have any recommendations on this issue? Multiple loraserver/appserver instances, maybe?

I am considering the possibility that the firmware on the Sanmina devices cannot handle a proper LoRaWAN server implementation, but I would like to work around this if possible.

Seemingly, not filling in the channel list should do that.

AFAIK LoRaServer doesn’t actually need to know the enabled channels, other than to inform the nodes. At least in my region, the rules for picking a downlink reply channel depend on the uplink channel used by the node, not the ones configured.

That is correct, and applies to all regions: the downlink channel is a function of the uplink channel. For some regions downlink = uplink, for others downlink = uplink % X.
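For US915 the rule is simple enough to sketch; the constants come from the Regional Parameters, and the function name here is just illustrative:

```go
package main

import "fmt"

// rx1Frequency sketches the US915 RX1 rule from the LoRaWAN Regional
// Parameters: the downlink channel is the uplink channel modulo 8, and the
// eight 500 kHz downlink channels start at 923.3 MHz with 600 kHz spacing.
func rx1Frequency(uplinkChannel int) (downlinkChannel int, freqHz int) {
	downlinkChannel = uplinkChannel % 8
	freqHz = 923_300_000 + downlinkChannel*600_000
	return downlinkChannel, freqHz
}

func main() {
	ch, f := rx1Frequency(10)
	fmt.Println(ch, f) // 2 924500000
}
```

So the reply channel only depends on which channel the node just used, not on the enabled_uplink_channels configuration.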

We are facing a similar issue with Netvox devices; it would be great if you could help us find a way around it.

I think it would be good to define “issue” in this case, as it seems there are a couple of questions mentioned in this topic.

When a device (US915 or similar in terms of channels) joins and it is not LoRaWAN 1.0.3 (which supports setting the channel-mask through the CFList), LoRa Server expects that it has all channels enabled. Therefore it will use the LinkADRReq mac-command (as also suggested by the LoRaWAN Regional Parameters) to tell the device to use only the enabled_uplink_channels.

Setting this channel-plan is completely independent of ADR; however, it shares the same mac-command (which is, in my opinion, unfortunate).

So even when the device has ADR turned off, it must acknowledge a LinkADRReq mac-command if the requested values are valid.
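To illustrate the device side (only a sketch with names I made up, not any vendor's firmware): the LinkADRAns status byte a compliant node returns depends on whether the requested values are valid, not on whether ADR is currently enabled.

```go
package main

import "fmt"

// linkADRAns sketches the LinkADRAns status byte:
// bit 0 = channel mask ACK, bit 1 = data rate ACK, bit 2 = TX power ACK.
func linkADRAns(chMaskOK, dataRateOK, txPowerOK bool) byte {
	var status byte
	if chMaskOK {
		status |= 1 << 0
	}
	if dataRateOK {
		status |= 1 << 1
	}
	if txPowerOK {
		status |= 1 << 2
	}
	return status
}

func main() {
	// The Sanmina behaviour described above corresponds to 0x00 (nothing
	// acknowledged); a device accepting all three fields would answer 0x07.
	fmt.Printf("0x%02X 0x%02X\n", linkADRAns(false, false, false), linkADRAns(true, true, true))
}
```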

I understand, thank you for clarifying. A few vendors I've worked with have devices that only partially implement the LoRaWAN protocol; I can see now how that can cause compatibility problems. Would it be reasonable to submit a feature request to support these devices? Some way to disable the channel-mask options for a specific application group, or to disable the LinkADRReq command entirely per application. The config option disable_adr=true seems too broad a setting.

I would do what cstratton is suggesting, but I believe this would break the devices I have that fully implement all 64 channels (US915 spectrum).

Is it normal for LoRaServer to issue a downlink with two LinkADRReq commands instead of just sending down one that includes both the ADR settings and the channel mask?

Sometimes I see a downlink where one LinkADRReq has a channel mask of 0x00 and ADR settings of TX power = 0 and DR = 0, and then another LinkADRReq that has actual values (not just 0,0 for DR and TX power).


(the first channel mask shows all channels as false)

I believe this is correct: the first LinkADRReq turns off all channels, and the second LinkADRReq turns on the channels that must be enabled. This can't be done within a single mac-command.
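A rough sketch of what such a pair could look like for the enabled_uplink_channels=[0..15] case above. The exact encoding LoRa Server uses may differ; here I assume ChMaskCntl=7 ("all 125 kHz channels off") for the first command and ChMaskCntl=0 (mask applies to channels 0-15) for the second, per the US902-928 Regional Parameters:

```go
package main

import "fmt"

// buildChannelMaskPair sketches the pair of LinkADRReq payloads described
// above for US915 when only channels 0-15 are enabled. DataRate and TXPower
// are left at 0 here, as in the downlink observed above.
func buildChannelMaskPair() [][]byte {
	linkADRReq := func(drTx byte, chMask uint16, chMaskCntl, nbTrans byte) []byte {
		return []byte{
			0x03, // CID for LinkADRReq
			drTx, // DataRate (bits 7..4) and TXPower (bits 3..0)
			byte(chMask), byte(chMask >> 8), // ChMask, little endian
			(chMaskCntl&0x07)<<4 | nbTrans&0x0F, // Redundancy byte
		}
	}
	return [][]byte{
		linkADRReq(0x00, 0x0000, 7, 0), // ChMaskCntl=7: all 125 kHz channels off
		linkADRReq(0x00, 0xFFFF, 0, 0), // ChMaskCntl=0: enable channels 0-15
	}
}

func main() {
	for _, cmd := range buildChannelMaskPair() {
		fmt.Printf("% X\n", cmd) // 03 00 00 00 70 / 03 00 FF FF 00
	}
}
```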


That makes sense. Looking at the spec (the 1.0.3 version), it says that an NS may send down multiple commands, as LoRaServer is doing.