Protobuf encoded uplink/downlink payloads

I’ve been quite frustrated by the lack of a standard among LoRa device manufacturers for their uplink & downlink payloads. It’s reached the point where The Things Industries is doing their best to solve the problem by maintaining a registry which needs to be continually updated as new devices are created.

Well-established manufacturers (looking at you, Tektelic, for example) do offer a JavaScript decoder, but that’s it. What about all the systems in other languages? The best they (and others) offer is a 60-page PDF with difficult-to-understand tables for custom binary encoding schemas. Trust me, I’ve asked: “Why don’t you supply a Python decoder?”

Am I the only one who feels like there must be a better way? It’s terrible DevEx, and I think it’s holding this technology back.

What’s silly is that there is already a pretty awesome tool for encoding extremely small, binary, schema-ful messages, used widely across the web. Efficient over the network, easily compiled for multiple languages, well supported, and open source: Protobuf.
Bonus: manufacturers could simply supply a language-neutral .proto file with each device, and developers could compile a decoder in their favorite language in minutes.
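As a sketch of what that could look like, here’s a hypothetical .proto a manufacturer might ship alongside a device (all field names and units invented for illustration):

```protobuf
syntax = "proto3";

// Hypothetical uplink schema for an imaginary environmental sensor.
message Uplink {
  uint32 battery_mv   = 1;  // battery voltage in millivolts
  sint32 temp_decic   = 2;  // temperature in tenths of a degree C
  uint32 humidity_pct = 3;  // relative humidity, 0-100
  bool   door_open    = 4;  // reed-switch state
}
```

From that one file, `protoc` can generate decoders for Python, Go, Java, C++, and more.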

Unless I’m missing something it seems silly to not be working towards a standard in payload encoding/decoding in this space.

Am I the only one who feels this way? Is anyone doing this in practice? Production?

Side note, I see that Meshtastic is using protobuf for some payload definitions which seems like a step in the right direction. What else am I missing?

And yes… I’m familiar with CayenneLPP. It’s cool, but I wouldn’t classify it as a standard.

For a device payload? And you want to run a protobuf encoder on a microcontroller-based device?

Protobuf is a nice format for sending between powerful (i.e. gateway and up) systems, but it’s still too large when I need to send 10 pieces of information in 11 bytes, which is frequently the case for custom-built LoRaWAN devices.
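To make the 10-values-in-11-bytes point concrete, here’s a minimal sketch of the kind of hand-rolled bit packing involved (field widths and names invented for illustration):

```python
# Pack a hypothetical 4-byte uplink: 12-bit battery reading, 11-bit
# temperature, 7-bit humidity, and 2 flag bits -- 32 bits, zero overhead.
def pack(battery, temp, humidity, door_open, tamper):
    word = (battery & 0xFFF) << 20       # bits 31..20
    word |= (temp & 0x7FF) << 9          # bits 19..9
    word |= (humidity & 0x7F) << 2       # bits 8..2
    word |= (door_open << 1) | tamper    # bits 1..0
    return word.to_bytes(4, "big")

payload = pack(battery=950, temp=215, humidity=64, door_open=1, tamper=0)
assert len(payload) == 4  # five values in four bytes
```

Every field boundary is implicit in the schema both ends agree on in advance, which is exactly why there’s nothing generic for an off-the-shelf decoder to latch onto.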

Caveat: I will readily admit most of my LPWAN work is with devices designed and coded in-house, and not off-the-shelf hardware that have a PDF you can download from a web site.

Yes, there’s actually a handful of lightweight Protobuf encoders/decoders sufficiently optimized for microcontrollers.

One of the more popular libraries: GitHub - nanopb/nanopb: Protocol Buffers with small code size

Google’s Pigweed supports its own protobuf library, pw_protobuf, as well as nanopb.

There are others as well.

I get the argument that it might be a little expensive still to encode/decode on the microcontroller side, but it appears as though chip efficiency is increasing faster than network (and time on air) restrictions are decreasing. So if the two lines haven’t intersected yet, I imagine they soon will.

Granted, that doesn’t speak to the point you made about ultra-lightweight custom binary encoding, but if the tradeoff were squeezing an extra 10-20% out of a payload with a custom encoder vs. standardizing on an interoperable binary payload format, that feels like a fair trade.
Also, I imagine (though I’m definitely not certain) that after you Base64 encode it like the LNS wants, those custom savings are negligible.

I think it usually comes down to the use case. In my experience (and my clients’), power consumption trumps everything else. When you’re deploying a device in a remote location with a battery that is expected to last 2, 3, or 5 years, the encoder should be trivial and the radio should be on for as short a time as possible.

If you’re on AC power, sure, go wild.

@bconway Since you’ve been writing your own encoding schemes, you know more about this than I do.

Question for you: what is the expensive part? Are you saying that you believe encoding protobuf messages would take measurably more compute time than custom binary encoding, thus reducing battery life, or that protobuf messages are larger and therefore spend more time transmitting, also reducing battery life…
Or both? :slight_smile:

Much sensor data consists of bit flags or small uints.
Usually you will want to pack several values into one byte.

Protobuf messages are key-value pairs, so if each key is one byte, you would roughly double the message size relative to raw byte values. The keys would be the same in every message, so there is no reason to send them.
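The per-field overhead is easy to see by hand-encoding the protobuf wire format; a rough sketch in Python (no protobuf library needed, sample values invented):

```python
def varint(n):
    """Encode a non-negative int as a protobuf varint (7 bits per byte)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | 0x80 if n else b)  # high bit set = more bytes follow
        if not n:
            return bytes(out)

def pb_field(field_no, value):
    """One varint field: tag byte (field_no << 3 | wire type 0) + value."""
    return varint(field_no << 3) + varint(value)

# Four small sensor readings as protobuf fields vs. raw packed bytes:
msg = b"".join(pb_field(i, v) for i, v in enumerate([23, 87, 1, 200], start=1))
raw = bytes([23, 87, 1, 200])
print(len(msg), len(raw))  # -> 9 4: one tag byte per field, 2 bytes for 200
```

The tag bytes are identical in every message, which is exactly the redundancy described above.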

Multiple values packed into one byte would need a custom decoder anyway. You could mask the data into a protobuf byte array and decode it using the protobuf library; the byte array would carry the changing data bytes.

Since the format is identifiable from the device EUI attached to the message, a decoder can be a database lookup away.

To get the smallest time on air, pre-shared info is used for security and data format.

Of all IoT problems, en/decoding is almost trivial. Of course, a one-time dev cost and a public repo would be nice. Many transpilers are available to convert between languages.
Getting 5+ years out of a coin cell requires trade-offs.


Thanks @Jason_Reiss for the reply.

Perhaps I’m misunderstanding protobuf, but doesn’t having the schema already known by both the sender and receiver of a message mean that the keys do not need to be sent?

Also, I’m realizing I should probably better understand custom encoding - for example, you mention packing multiple values into a single byte. Are there any good guides/resources for designing a custom encoding format?

There is additional type info encoded as well.

When a message is encoded, each key-value pair is turned into a record consisting of the field number, a wire type and a payload. The wire type tells the parser how big the payload after it is. This allows old parsers to skip over new fields they don’t understand. This type of scheme is sometimes called Tag-Length-Value, or TLV.
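That record structure can be walked by hand; a minimal parser sketch in Python (handles wire type 0, varints, only):

```python
def read_varint(buf, i):
    """Decode a varint starting at buf[i]; return (value, next index)."""
    shift = value = 0
    while True:
        b = buf[i]
        value |= (b & 0x7F) << shift
        i += 1
        if not b & 0x80:       # high bit clear = last byte of the varint
            return value, i
        shift += 7

def parse(buf):
    """Yield (field_number, wire_type, value) records from a message."""
    i = 0
    while i < len(buf):
        tag, i = read_varint(buf, i)
        field_no, wire_type = tag >> 3, tag & 0x07
        assert wire_type == 0, "sketch handles varint fields only"
        value, i = read_varint(buf, i)
        yield field_no, wire_type, value

# b"\x08\x96\x01" is the protobuf docs' canonical example: field 1, value 150
print(list(parse(b"\x08\x96\x01")))  # -> [(1, 0, 150)]
```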

TTN has a thorough guide for device payload considerations.


Thanks @Jason_Reiss that TTN link was very useful, not sure how I missed it before.

Looks like I need to better understand the Protobuf protocol too.

Hi there :wave:,

As a device manufacturer, one of our challenges is to make the payload as short as possible. Each bit (not byte) counts. The longer the payload, the more power the device consumes.

In our case, the payload can change depending on parameters (let’s say there are different modes, which is the case for many sensors). So the decoding will be different depending on those parameters.

So if we wanted to keep a “fixed” structure for the message, we would have to provide a very long payload that can handle all the cases. In practice this is not possible (the payload would be too long), so we optimize it for each mode. That means that one bit in the message can have different meanings depending on the mode.
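A sketch of what that mode-dependent decoding can look like (mode layout and fields invented for illustration): the same bits mean different things depending on a mode selector at the front of the payload.

```python
# Hypothetical payload: top two bits of byte 0 select the mode; the
# remaining bits are interpreted differently per mode.
def decode(payload: bytes) -> dict:
    mode = payload[0] >> 6
    if mode == 0:   # periodic report: 14-bit temperature in tenths of a degree
        raw = ((payload[0] & 0x3F) << 8) | payload[1]
        return {"mode": "periodic", "temp_c": raw / 10}
    if mode == 1:   # alarm: 6-bit alarm code, single-byte payload
        return {"mode": "alarm", "code": payload[0] & 0x3F}
    raise ValueError("unknown mode")

print(decode(bytes([0x00, 0xD7])))  # -> {'mode': 'periodic', 'temp_c': 21.5}
print(decode(bytes([0x45])))        # -> {'mode': 'alarm', 'code': 5}
```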

This is one reason why it is more convenient to do the decoding on the server side (there are actually others).

I hope this helps,