Factors affecting Rx delay

Dear community,
I want to keep the Rx delay as small as possible for my application. To my understanding, the SF of the receive windows (RX1 and RX2) and the gateway-to-network-server connectivity (GSM/Ethernet) should be taken into account when configuring the Rx delay. Are there any additional factors to consider when setting it? And what are the benefits and drawbacks of extending or shortening the Rx delay?

Thanks

You want to increase the rx1_delay when the round trip from the gateway to the network server and back to the gateway (for the downlink) is too long to schedule downlinks successfully (e.g. you will get errors that the packet arrived too late at the gateway to be scheduled).
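
To make this concrete, here is a rough back-of-the-envelope sketch (not ChirpStack code; the helper name, the gateway margin and all latency figures are just assumptions) that checks whether the round trip still leaves enough headroom to get the downlink queued before RX1 opens:

```python
# Hypothetical check: can a downlink still be scheduled in RX1 given the
# gateway <-> network-server round trip? All figures are illustrative.

def can_schedule_rx1(rx1_delay_s: float,
                     gw_to_ns_s: float,
                     ns_processing_s: float,
                     ns_to_gw_s: float,
                     gw_margin_s: float = 0.2) -> bool:
    """Return True if the downlink frame can reach the gateway early enough.

    gw_margin_s is the time the gateway needs to have the frame queued
    before the RX1 window actually opens (assumed JIT-queue margin).
    """
    round_trip = gw_to_ns_s + ns_processing_s + ns_to_gw_s
    return round_trip + gw_margin_s <= rx1_delay_s


# Example: over a slow cellular backhaul the round trip can approach 1 s,
# so rx1_delay = 1 is too tight, while rx1_delay = 3 still leaves headroom.
print(can_schedule_rx1(1.0, gw_to_ns_s=0.4, ns_processing_s=0.3, ns_to_gw_s=0.4))  # False
print(can_schedule_rx1(3.0, gw_to_ns_s=0.4, ns_processing_s=0.3, ns_to_gw_s=0.4))  # True
```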

A reason not to set the rx1_delay to the maximum could be the time drift of the device. If the device does not have an accurate time source, it could open the receive window too early or too late and might not detect the downlink. Alternatively, if the device knows it should compensate for the time drift, it will listen for a longer time and thus use more battery.


Thank you for your response @brocaar. As per my understanding, whether we set the rx1_delay to the minimum or the maximum, the time drift would remain the same. How could the drift of the device be a reason not to set the rx1_delay to the maximum? Are you saying that the time drift increases as the rx1_delay increases?

Compare it to a clock that you need to set by hand (e.g. an oven clock). After one day the time is probably still quite accurate, but after a few months it might have drifted by a minute or so (+/-). The same holds for the downlink receive windows: after a few seconds the time drift (assuming the device doesn't have an accurate time source) is probably not significant, but if you set the delay to the maximum it might become a problem with some devices. You could work around this in the firmware (e.g. by widening the window in which you wait for the preamble), but that will cost a bit more power.
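
Purely as an illustration (the oscillator tolerances and the RX current below are made-up assumptions, not measurements), you can put numbers on that trade-off: the worst-case drift grows linearly with the delay, and so does the extra listening time needed to compensate for it.

```python
# Illustrative numbers only: how much timing error can a device accumulate
# while waiting for the RX window, and what does widening the window to
# compensate for it cost? Both oscillator figures and the current are assumed.

RX_CURRENT_MA = 10.0  # assumed radio current while listening, in mA

def worst_case_drift_s(rx_delay_s: float, clock_error_ppm: float) -> float:
    """Timing error accumulated between the end of the uplink and the RX window."""
    return rx_delay_s * clock_error_ppm * 1e-6

for name, ppm in (("crystal (~30 ppm)", 30), ("internal RC (~1% = 10000 ppm)", 10_000)):
    for delay in (1, 5, 15):
        err = worst_case_drift_s(delay, ppm)
        extra_listen_s = 2 * err          # open the window ~err early and keep it ~err longer
        extra_charge_mc = RX_CURRENT_MA * extra_listen_s
        print(f"{name:28s} delay {delay:2d} s: drift ±{err*1e3:8.3f} ms, "
              f"extra listen {extra_listen_s*1e3:8.3f} ms, "
              f"≈{extra_charge_mc:7.3f} mC per RX window")
```

With a reasonable crystal the drift over even the maximum delay stays small, but with a sloppy time source (e.g. an uncalibrated RC oscillator) a long delay can push the error into the range where the receiver misses the preamble unless the firmware widens the window, and that widening is paid for in battery.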