My application generates downlink packets when it receives uplinks. It does not queue them ahead of time. Is there a way to determine the number of bytes available in the downlink once you have the uplink?
I know you can get a rough idea based on the uplink data rate, the rx1_dr_offset, and the regional parameters, but I haven’t found a way to get the rx1_dr_offset programmatically. Is there a way to do this? I’m currently hardcoding the offset, but that’s not ideal.
Additionally, as I understand it, MAC commands carried in the FOpts field may be bundled with the downlink, reducing the number of bytes available for the application payload. Is this correct? Is there a way to determine how many bytes are going to be used for FOpts?
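For the calculation itself, here is a minimal sketch of how the RX1 downlink data rate and remaining payload budget can be derived from the uplink DR, the RX1 DR offset, and the regional-parameters table. The function names are my own, and the table below is the EU868 maximum application payload (N) per DR from the LoRaWAN Regional Parameters; other regions have different tables and offset rules, so adapt accordingly:

```python
# EU868 max application payload (N) per data rate, per the LoRaWAN
# Regional Parameters. Other regions use different values.
EU868_MAX_APP_PAYLOAD = {0: 51, 1: 51, 2: 51, 3: 115, 4: 222, 5: 222, 6: 222, 7: 222}

def rx1_downlink_dr(uplink_dr: int, rx1_dr_offset: int) -> int:
    # EU868 rule: RX1 DR = uplink DR minus the offset, floored at DR0.
    return max(0, uplink_dr - rx1_dr_offset)

def max_downlink_payload(uplink_dr: int, rx1_dr_offset: int, fopts_len: int = 0) -> int:
    # Bytes left for the application payload after FOpts (MAC commands)
    # are subtracted from the regional maximum for the RX1 data rate.
    dr = rx1_downlink_dr(uplink_dr, rx1_dr_offset)
    return EU868_MAX_APP_PAYLOAD[dr] - fopts_len
```

For example, an uplink at DR2 with offset 0 and 4 bytes of pending MAC commands leaves 51 - 4 = 47 bytes for the application payload. Note that FOpts itself is capped at 15 bytes; larger MAC-command batches are sent on FPort 0 and consume the whole payload.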
This is tricky, because there is no guarantee that a downlink is sent immediately in response to a Class-A uplink. In the latest ChirpStack v3 versions (and also ChirpStack v4), mac-commands get priority over application payloads. If ChirpStack can’t send both, then the application payload will stay in the queue.
Technically, the next uplink could use a lower DR, which would mean that the downlink response has a different max payload than earlier downlinks. The opposite could also be true.
The cause could be pending mac-commands, but it could also be a gateway unable to handle the downlink due to a collision with another downlink scheduled at around the same time (or a Class-B beacon), in which case the downlink stays in the queue until the next downlink opportunity.
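To make the scheduling rule concrete, here is a toy model (not ChirpStack’s actual code) of the behavior described above: mac-commands are placed first, and an application payload that doesn’t fit alongside them stays queued for the next downlink opportunity. All names here are illustrative:

```python
from collections import deque

def schedule_downlink(max_payload: int, mac_commands_len: int, app_queue: deque):
    """Toy scheduler: returns (fopts_len, app_item or None).

    mac_commands_len bytes of MAC commands always go out first; the head
    of app_queue is only included if it fits in the remaining budget,
    otherwise it stays queued.
    """
    remaining = max_payload - mac_commands_len  # mac-commands have priority
    if app_queue and len(app_queue[0]) <= remaining:
        return mac_commands_len, app_queue.popleft()
    return mac_commands_len, None  # app payload waits for the next opportunity
```

So a 50-byte payload queued while the device is at DR0 (budget 51) with 4 bytes of MAC commands pending is skipped, but goes out once a later uplink arrives at a higher DR (or the MAC commands have been flushed).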