[Conduit with FTDI MTAC-LORA] Downlink not sent at each time

Hi !

I currently have an issue with the LoRa Server related to downlink transmission.

My test: I have a device that sends an uplink every 10 seconds. Each uplink triggers the transmission of a downlink by the backend. (This is a test; under normal conditions this message is sent once per day.)

My infrastructure:

  • a LoRa Server, version 0.24 (not the latest)
  • a Multitech gateway with the lora-gateway-bridge installed on it
  • a LoRa module: MTAC-LORA (not SPI)

As far as I can see in the logs, the downlink is correctly generated by my backend and transmitted to the gateway bridge.
I am unable to read anything useful in the basic_packet_forwarder log.
When my device does not receive the downlink, I don’t see any transmission from the gateway on an SDR.

I have periods where 100% of the downlinks are received (lasting 5–10 minutes), but also periods without any transmission at all. Over roughly 30 minutes, only about 40% of packets are received.


  • As I can see in the packet-forwarder log, I sometimes get the following error. It may be related, but I am unable to understand why… :

WARNING: [down] packet was scheduled but failed to TX

Do you have any ideas about what I should do to identify the root cause and fix this situation?

Thanks a lot for your help


Hi Gilles. As you have the FTDI version of the MTAC card, you’re limited to an old packet-forwarder version which officially does not support any form of queueing (i.e. it implements a queue of size 1; everything sent to the gateway overwrites that single queue element).
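For illustration, here is a minimal sketch of what that size-1 "queue" implies (my own Python mock-up, not the actual packet-forwarder code, which is C): when a second downlink arrives before the first is transmitted, the first one is silently lost.

```python
# Hypothetical sketch of the size-1 downlink "queue" in the old
# basic_packet_forwarder: a newly scheduled downlink overwrites the
# pending one, so back-to-back downlinks drop packets.

class SizeOneQueue:
    def __init__(self):
        self.pending = None  # at most one scheduled downlink

    def schedule(self, packet):
        dropped = self.pending  # whatever was waiting is lost
        self.pending = packet   # the new packet overwrites the slot
        return dropped

    def transmit(self):
        packet, self.pending = self.pending, None
        return packet

q = SizeOneQueue()
q.schedule("downlink-1")
lost = q.schedule("downlink-2")  # "downlink-1" is overwritten
print(lost)          # downlink-1
print(q.transmit())  # downlink-2
```

With a device uplinking every 10 seconds, any burst of scheduled downlinks hits exactly this overwrite path.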

I believe that Multitech added some patches to support (some form of) queueing, to work around this issue for the FTDI MTAC card. However, I’m not sure how robust their solution is.

If you have the option, I would also try the SPI version of their MTAC LoRa card. At least then you’re able to use the latest packet-forwarder version, without these patches and with out-of-the-box support for queuing / just-in-time scheduling. FTDI support has been deprecated in the packet-forwarder since Oct 2015.

(fyi: I’m providing a vanilla lora-packet-forwarder package (SPI) for the Conduit, which also allows you to set up 16 channels by running two packet-forwarders in parallel, one for AP1, one for AP2: https://docs.loraserver.io/lora-gateway-bridge/install/gateway/multitech/).

Thanks @brocaar for your quick assistance.

I have seen that the basic_packet_forwarder is no longer developed, and will check with support how we can improve our situation.
If my packet forwarder doesn’t support queuing, how do other LoRa servers handle the queue? With a specific backend mechanism that sends the messages over the air at the right moment?

I know that the packet-forwarder has been patched (the log line from my first post comes from the patch…).

At the moment we only have FTDI MTAC cards… we will look into the SPI version to fix our problem. The dual packet-forwarder setup is very interesting, and I hope to try it.

I will update this topic with whatever answers I find.

It does (using the Multitech patch), only I don’t know how well this implementation performs. Most gateways on the market today use SPI and are able to use the latest packet-forwarder which does implement queuing :slight_smile:
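As a rough illustration of what the just-in-time approach in the current packet-forwarder buys you (my own Python sketch with invented names; the real implementation is in C): each downlink carries a target concentrator timestamp, the queue holds many packets sorted by TX time, and a packet is handed to the radio only shortly before it is due, so pending downlinks no longer overwrite each other.

```python
import heapq

# Hypothetical sketch of just-in-time downlink scheduling: packets are
# queued with a target TX time and popped only when that time is near.

TX_MARGIN = 0.030  # seconds before TX time to hand off to the radio (illustrative)

class JitQueue:
    def __init__(self):
        self._heap = []  # (tx_time, packet) pairs, ordered by tx_time

    def enqueue(self, tx_time, packet):
        heapq.heappush(self._heap, (tx_time, packet))

    def pop_due(self, now):
        """Return the next packet whose TX time is within the margin, else None."""
        if self._heap and self._heap[0][0] - now <= TX_MARGIN:
            return heapq.heappop(self._heap)[1]
        return None

q = JitQueue()
q.enqueue(10.0, "downlink-A")
q.enqueue(12.0, "downlink-B")   # both packets coexist in the queue
print(q.pop_due(9.5))    # None (too early)
print(q.pop_due(9.98))   # downlink-A
print(q.pop_due(11.99))  # downlink-B
```

The key difference from the size-1 slot is simply that enqueueing never discards an already-scheduled packet.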

After discussion with Multitech support, I have increased my RX delay to 5 seconds, to give the message time to be received and pushed to the packet forwarder.

My device now receives 97% of the downlink packets, an acceptable value.

The SPI packet forwarder may handle this better…

Ah, in that case the issue is maybe not the packet-forwarder, but the latency between the gateway and the network?

Maybe, but as far as I can see in the logs, the answer reaches the gateway bridge really quickly. However, without precise timestamps in the packet forwarder and the gateway bridge (only seconds are logged), it’s difficult to track this aspect.

From the gateway, the ping to the lora-server is 12 ms on average… and the answer from my backend is done in about 80 ms. Normally, that leaves plenty of time to generate and schedule the answer to the packet forwarder :sunglasses:
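The timing budget those figures imply can be sketched as follows (assumed/illustrative values; the actual RX1 delay depends on the network configuration, and this ignores OS and serial-link latency inside the gateway):

```python
# Rough class-A downlink latency budget, in milliseconds, using the
# figures mentioned in this thread (illustrative values only).

rx1_delay = 1000       # default RX1 delay; raised to 5000 in this thread
gw_server_rtt = 12     # gateway <-> lora-server round trip (measured ping)
backend_answer = 80    # time for the backend to produce the downlink

margin = rx1_delay - (gw_server_rtt + backend_answer)
print(f"margin before RX1 window: {margin} ms")  # margin before RX1 window: 908 ms
```

On paper ~900 ms of slack remain even with a 1-second RX1 delay, which is why the network latency alone does not obviously explain the losses.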

Would you be able to give your test feedback on: