Hi, I’m new here and I need your help with my graduate thesis. I’m looking for information about Adaptive Data Rate (ADR) and how it works.

My understanding so far: to determine the optimal data rate, the network needs some measurements, something like the 20 most recent uplinks, starting from the moment the ADR bit is set. For each of these measurements it takes the SNR of the best gateway and calculates the “margin”, which is the measured SNR minus the SNR required to demodulate a message at the current data rate. I know this margin is used to determine how much the data rate can be increased or the transmit power lowered, but I don’t understand how. When does the algorithm decide that the data rate should be increased?

Also, if I want to reverse-engineer how ADR makes decisions on a given network, starting from a dataset of measurements, which data should I analyze? I have created charts using SNR, spreading factor, RSSI, and the margin (SNR minus the SNR required to demodulate a message at the given spreading factor), but I haven’t found any correlation yet. What am I doing wrong?
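To make my question concrete, here is how I currently model the decision step in Python, based on what I’ve read about the Semtech reference ADR algorithm. The installation margin, the 3 dB step size, and the SF/power limits are my assumptions (they are region- and network-configurable), so please correct me if this is wrong:

```python
# Sketch of the Semtech reference ADR decision as I understand it.
# Demodulation floor (required SNR, dB) per spreading factor, 125 kHz LoRa.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

INSTALLATION_MARGIN = 10.0  # dB; assumed device margin, network-configurable
MIN_SF = 7                  # assumed fastest data rate (region-dependent)
MAX_TX_REDUCTION = 20       # dB; assumed max transmit-power reduction

def adr_decision(snr_history, sf, tx_power_reduction=0):
    """snr_history: best-gateway SNR of the last (up to) 20 uplinks."""
    snr_max = max(snr_history)
    margin = snr_max - REQUIRED_SNR[sf] - INSTALLATION_MARGIN
    n_steps = int(margin // 3)  # one "step" per 3 dB of headroom
    # Spend steps first on raising the data rate (lowering the SF)...
    while n_steps > 0 and sf > MIN_SF:
        sf -= 1
        n_steps -= 1
    # ...then on lowering the transmit power, 3 dB per remaining step.
    while n_steps > 0 and tx_power_reduction < MAX_TX_REDUCTION:
        tx_power_reduction += 3
        n_steps -= 1
    return sf, tx_power_reduction

# Example: best SNR -1 dB at SF12 -> margin = -1 - (-20) - 10 = 9 dB,
# i.e. 3 steps, so the device can move from SF12 to SF9.
print(adr_decision([-3.0, -5.5, -1.0], sf=12))  # -> (9, 0)
```

Is this roughly what the network server does, i.e. it increases the data rate whenever the margin allows at least one 3 dB step, and only starts reducing power once the fastest data rate is reached?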
Any information would be helpful.
Thanks