Receive window selection

Dear all,

When I use OTAA mode, I am always getting downlink data on RX1. Is there any configuration parameter to receive downlink data on a chosen receive window (RX1 or RX2) without switching to ABP mode?
Is this behaviour related to the end-devices or to the loraserver?

Best regards,
Samer.

Currently this can’t be configured from the user interface. The reason is that I want to improve the LoRa Server scheduling in the future, so that it selects either RX1 or RX2 (based on timing, availability of gateways, etc.). The RX1 or RX2 selection would then happen just before transmitting, not as a configuration.

Ok, thank you for your response. By the way, the Arduino LMIC library mentions that RX1 has not been tested for downlink reception, but in my configuration it is working very well.

What has not been tested: Receiving downlink packets in the RX1 window.

Suppose I set the RX window manually using my own algorithm. Will that be application-specific or device-specific? What I mean is: suppose I set RX2 manually… will it persist for that device only, or for all the devices under that application?

OK, so if it can’t be configured, how does it currently work?

It’s my understanding that the node will be given the RX2 frequency/DR as part of the JOIN (and that it determines the RX1 frequency/DR based on the rules for the given channel plan). Is that correct?

If so, then the node will always listen during both RX slots, right? But which one does the current loraserver implementation actually use?
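
For reference, my mental model of the two windows under EU868 looks roughly like the sketch below (the values come from the LoRaWAN regional parameters; the type and function names are only illustrative, not LoRa Server code):

  package main

  import "fmt"

  // EU868 values from the LoRaWAN regional parameters; the type and
  // function names are only illustrative, not taken from LoRa Server.
  const (
      rx2FrequencyHz = 869525000 // EU868 default RX2 frequency
      rx2DataRate    = 0         // EU868 default RX2 data-rate (SF12BW125)
  )

  type radioParams struct {
      FrequencyHz int
      DataRate    int
  }

  // rx1Params derives the RX1 window from the uplink: same frequency as
  // the uplink, data-rate lowered by the RX1 DR offset (floored at DR0).
  func rx1Params(uplink radioParams, rx1DROffset int) radioParams {
      dr := uplink.DataRate - rx1DROffset
      if dr < 0 {
          dr = 0
      }
      return radioParams{FrequencyHz: uplink.FrequencyHz, DataRate: dr}
  }

  func main() {
      uplink := radioParams{FrequencyHz: 868100000, DataRate: 5} // SF7BW125 uplink
      fmt.Println("RX1:", rx1Params(uplink, 0))
      fmt.Println("RX2:", radioParams{FrequencyHz: rx2FrequencyHz, DataRate: rx2DataRate})
  }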

For Class-A downlink, LoRa Server currently only uses RX1. For Class-C it will use the RX2 settings (as communicated as part of the OTAA join-accept message).

Huh. So for Class C devices, all downlinks use the RX2 frequency (since there really isn’t an RX2 “slot” for Class C, right)? Do you ever plan to support RX2 for Class C devices? I’m not specifically wanting it, I’m just curious. Can you provide any insight on this decision?

What I mean is that for Class-A devices, LoRa Server currently only uses the RX1 receive-window. My plan is to support an RX1 / RX2 switch or an RX2 fallback for Class-A (when scheduling on RX1 fails, the packet-forwarder sends a nack on error). The challenge here is that RX1 and RX2 have different constraints (e.g. a payload that you can schedule using RX1 might not fit in RX2 because of a different data-rate). Also, when using RX1 you benefit from frequency hopping, whereas RX2 uses a fixed frequency. But this preference could be made configurable through loraserver.toml.
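
To illustrate the payload constraint: using the EU868 maximum payload sizes from the regional parameters, a frame that is fine for RX1 at DR5 can be too large for RX2 at DR0. A rough sketch (the helper below is only illustrative, not the actual LoRa Server code):

  package main

  import "fmt"

  // Maximum application payload sizes per data-rate for EU868
  // (repeater-compatible limits from the LoRaWAN regional parameters).
  var maxPayload = map[int]int{0: 51, 1: 51, 2: 51, 3: 115, 4: 222, 5: 222}

  // fitsDataRate is an illustrative helper: it reports whether a payload
  // of the given size can be sent at the given data-rate.
  func fitsDataRate(payloadSize, dr int) bool {
      limit, ok := maxPayload[dr]
      return ok && payloadSize <= limit
  }

  func main() {
      payloadSize := 100
      fmt.Println("fits RX1 at DR5:", fitsDataRate(payloadSize, 5)) // true
      fmt.Println("fits RX2 at DR0:", fitsDataRate(payloadSize, 0)) // false
  }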

Class-C devices use the RX2 parameters, so that is why you are able to configure these parameters in the loraserver.toml configuration.

What is a bug in which interface?

@brocaar I have a type of node that just doesn’t want to join my network, and I suspect it might be because its stack only listens for join accepts in RX2. At least I can see in the server that it transmits a join accept on every join request, and I can see on my spectrum analyzer that the gateway transmits the join accept exactly 5 seconds after the uplink. But the node just times out and keeps resending the join request.
The maker claims his device works perfectly well on KPN and TTN, but in my experience those networks mainly use RX2 for joining.
The question I am getting to: do you foresee, within a reasonable term, adding some kind of alternation of subsequent join accepts between RX1 and RX2, just to accommodate nodes out there with an incomplete stack when it comes to the join process?

This has been implemented (master branch) and is now configurable in the loraserver.toml config file:

  # RX window (Class-A).
  #
  # Set this to:
  # 0: RX1, fallback to RX2 (on RX1 scheduling error)
  # 1: RX1 only
  # 2: RX2 only
  rx_window=0

By default RX1 will be used. When a negative ACK is received from the gateway, LoRa Server will retry using RX2 parameters (given the payload fits within the limitations of the RX2 data-rate).
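
Conceptually, the rx_window=0 fallback behaves like the sketch below (the function names are only illustrative stand-ins for the real scheduling internals): schedule on RX1 first, and retry with the RX2 parameters only when the gateway nacks and the payload still fits.

  package main

  import (
      "errors"
      "fmt"
  )

  var errGatewayNack = errors.New("gateway sent a nack")

  // sendRX1 / sendRX2 stand in for the real downlink scheduling; here they
  // only simulate a gateway that rejects the RX1 attempt.
  func sendRX1(payload []byte) error { return errGatewayNack }
  func sendRX2(payload []byte) error { return nil }

  // fitsRX2 stands in for the RX2 data-rate check (e.g. 51 bytes at DR0 for EU868).
  func fitsRX2(payload []byte) bool { return len(payload) <= 51 }

  // scheduleDownlink sketches the rx_window=0 behaviour: RX1 first,
  // fall back to RX2 only when the gateway nacks and the payload fits.
  func scheduleDownlink(payload []byte) error {
      if err := sendRX1(payload); err == nil {
          return nil // scheduled in RX1
      }
      if !fitsRX2(payload) {
          return errors.New("payload does not fit the RX2 data-rate")
      }
      return sendRX2(payload) // retry using the RX2 parameters
  }

  func main() {
      err := scheduleDownlink(make([]byte, 20))
      fmt.Println("fell back to RX2:", err == nil)
  }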

Hi!
I set rx_window to 2, but in my logs on the base station the delay between the uplink and the downlink (with the MAC parameters response) is still 1 second. Is that the correct behaviour?

Not quite sure, but I think I can confirm the strange behaviour.
My loraserver (3.2.1) is configured with

rx_window=2

but it looks like it only uses the first window.