Clear devNonce from database?

Is this possible?
Can I clear DevNonce values from the database? I understand that a nonce cannot be reused… for security reasons.

I also know that if I remove the node and redeploy it, the nonce database will be cleared.
Can I still do this:

Yes, I think I’ll implement this in a future version :slight_smile:
For now, clearing the used_dev_nonces column (node table in the database) would do the trick.

It’s a workaround from 2017.

To my knowledge there is no API-based way to do this yet. I go into the database and clear them out when a test firmware has gone rogue.

Thanks… but I cannot find the “node” table…

Where did you clear the old nonces?

It seems that even if I delete the device and redeploy it (with the same “new” OTAA keys), I still get:
error: “validate dev-nonce error”

OK, so it seems the dev_nonce values are stored in this device_activation table.

Can I flush this table? TRUNCATE device_activation?

I understand that nonces are there for security… but on a test network server with many devices, flushing used nonces is needed for testing.

It’s not for production.

Yes, I believe it is intended that nonces are persisted even when a device has been removed and re-added.

That looks like the right table. I would recommend wiping only the rows for the dev_eui you need to reset, rather than wiping the entire table. But both approaches would work.
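For example, something along these lines (a sketch assuming ChirpStack’s / LoRa Server’s PostgreSQL schema, where device_activation.dev_eui is a bytea column; the DevEUI below is just a placeholder — verify the table and column names against your version first):

```sql
-- Remove the stored join nonces for a single device only.
-- Replace the placeholder DevEUI with the device you want to reset.
DELETE FROM device_activation
WHERE dev_eui = '\x0102030405060708';
```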

Thanks… I did clean out the dev_nonce entries; the table had 500K rows in it.

Much better results on JOIN now… but I still get the “validate dev-nonce error” on some nodes.

So the “validate dev-nonce error” is not only about duplicate nonces; it must be something else?
Is there some log I can check for more information…

My errors are displayed with mosquitto_sub -t "application/#" -v

"“type”:“OTAA”,“error”:“validate dev-nonce error”}"

Does Redis keep track of dev-nonces?
Can I also flush Redis, to check if that solves the problem?

redis isn’t postgre sql where dev_nonces stored for each Dev_EUI

Here is how we clear it:

It is unfortunate that the specification does not use a 32-bit nonce, since the nonce is only used during the join procedure. It would save us from a lot of issues.

I don’t think your approach of “regularly re-joining to avoid problems with up/down counters” is going to be very feasible in the long term, and I would recommend reconsidering it.

Can you elaborate a bit, please?

I will give more context here to discuss on…

We used to use ABP, but managing the up/down frame counters was a pain to deal with… My device is an actuator with periodic reporting of its state, not a Class A sensor, just to give more context…

With ABP, the counters could only ever grow, and a serious problem occurred when there was either an outage of the network server, or worse, when something went wrong after, for example, an update and we lost the information about the current state of the up/down counters.

With OTAA + the join procedure, this problem is gone. The device uses unconfirmed messages to report its state, but from time to time it sends a confirmed message and keeps sending confirmed messages until it gets an acknowledgement back. Past a certain threshold it decides it got “disconnected” and re-tries to join.
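The logic described above, as a rough sketch (pseudocode; the interval and threshold names are placeholders, not actual firmware parameters):

```
every REPORT_INTERVAL:
    send unconfirmed uplink with current state

every KEEPALIVE_INTERVAL:
    send confirmed uplink
    while no ACK received and retries < MAX_RETRIES:
        resend confirmed uplink
    if retries >= MAX_RETRIES:
        # assume we are "disconnected"
        start OTAA join procedure

every REJOIN_INTERVAL:          # ~1 week safety net
    start OTAA join procedure unconditionally
```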

We also introduced periodic re-joins with a period of a week or so, to be sure the device eventually joins even if the mechanism mentioned above fails for some reason. That theoretically gives ~60 re-joins per year, which is OK if our limit is 65535 (65535 / 60 ≈ 1090 years at the nominal rate), but there is a limit…

That is why I’m strongly considering the clearing approach, though maybe I could increase the period from 100 days to something even higher. More nonces get consumed, for example, when a join fails, or when there is a longer outage of the gateways in the network, or worse, an outage of the network server; in those cases the device keeps re-trying the join at a certain period and consumes more than ~60 nonces per year.