How to get data from a sensor?

I deployed a server, but I can’t figure out how to get data from the sensor. There is a socket, and there are API methods. Using the method GET /api/devices/{dev_eui}
I am not getting data. What do I do? I’m new to this, and I am writing in Python.
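For context, this is roughly what I am trying. A minimal sketch, assuming a ChirpStack v3 Application Server on localhost:8080 and an API token created under "API keys" in the web interface (the token value below is a placeholder):

```python
# Sketch: call GET /api/devices/{dev_eui} on the ChirpStack REST API.
# Assumes a v3 Application Server on localhost:8080; only the Python
# standard library is used.
import json
from urllib import request

API_BASE = "http://localhost:8080/api"
API_TOKEN = "PASTE-YOUR-API-TOKEN"  # created in the ChirpStack web UI


def build_request(dev_eui: str, base: str = API_BASE, token: str = API_TOKEN):
    """Build the GET /api/devices/{dev_eui} request with the auth header."""
    url = f"{base}/devices/{dev_eui}"
    return request.Request(
        url,
        headers={"Grpc-Metadata-Authorization": f"Bearer {token}"},
    )


# Uncomment to actually perform the call:
# with request.urlopen(build_request("0102030405060708")) as resp:
#     print(json.load(resp))
```

Note that this endpoint returns device metadata (name, profile, last-seen), not the uplink payloads themselves, which may be why no sensor data shows up.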


Did you install the latest ChirpStack (otherwise I can’t help)? Did you add at least one gateway? Did you create at least one application and then add at least one sensor? These are prerequisites for getting any data, unless you’re running a simulation (I can’t help with that).

Under the application, are any sensors showing a “last seen” that isn’t “n/a”? If so, click on the device name and look at the “lorawan frames” tab: are there any JoinRequest and JoinAccept frames followed by data frames?

If yes, switch to “device data” and you can see the data arriving. If you are using off-the-shelf sensors, why bother writing your own decoder?

I create device profiles for my sensors and paste in the decoder function under the CODEC tab.

Then, when new devices are created with this profile, the data is decoded for me.

You can find codecs for your sensors on the vendor’s support pages or in community projects.

What you described are the live data packages you can see in the application.
The question seems to be how to access the data stored in the PostgreSQL database (table “device_up”, where the decoded JSON data is stored in the “object” column) via the ChirpStack API.

That is actually the same issue I’m currently facing.
I would like to access this information as well and I need to be able to specify a time interval to access the data.
Give me all the received data for device_eui = XXX between TIMESTAMP_A and TIMESTAMP_B.
I also need to be able to get the data for a device_eui like “return the last 10 packages received before NOW or before TIMESTAMP_X”.

Is this possible? Pointing to the API would be great as I’m lost here as well.

You can NOT. The data is published to the broker, and if there is no client subscribing to that data stream, the data is simply lost, like the proverbial tree falling in the forest with nobody to hear it. PostgreSQL stores the data for only moments, which is why it can run in a turn-key box without running out of disk space. You should consider data received 15 minutes ago lost forever*.

This requires a separate application subscribing. It can be done in many ways. In my case, the data is subscribed to by a Node-RED flow that saves it in TXT files, which comes in handy for debugging. My SCADA also subscribes for the few dozen sensors it needs; that data goes to its built-in data historian as process data.

*The only caveat is that if a sensor is configured for ACK and something happens to the gateway so the sensor doesn’t get its ACK, some sensors will store some quantity of messages on board. I know Laird RS191 sensors do this.
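A minimal sketch of such a subscriber in Python, assuming the default ChirpStack v3 topic layout (`application/{id}/device/{devEUI}/event/up`), a broker on localhost, and the third-party paho-mqtt package (1.x API); topic names and field names may differ in other versions:

```python
# Sketch: subscribe to ChirpStack uplink events over MQTT and extract
# the decoded payload. Assumes the default v3 topic layout and JSON
# marshaling; the "object" field is only present when a codec is set.
import json

UPLINK_TOPIC = "application/+/device/+/event/up"


def extract_uplink(payload: bytes) -> dict:
    """Pull the interesting fields out of one uplink event message."""
    event = json.loads(payload)
    return {
        "devEUI": event.get("devEUI"),
        "fCnt": event.get("fCnt"),
        "object": event.get("object"),  # decoded data, if a codec is set
    }


def run(broker: str = "localhost") -> None:
    """Connect and print every uplink (blocks forever)."""
    # Third-party dependency: pip install paho-mqtt
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe(UPLINK_TOPIC)

    def on_message(client, userdata, msg):
        print(msg.topic, extract_uplink(msg.payload))

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(broker, 1883)
    client.loop_forever()


# run()  # uncomment to start listening
```

From `on_message` you can write to files, a database, or anything else, which is exactly the “separate application subscribing” described above.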

Thank you for the reply.

At least the statement above that PostgreSQL stores the data only for moments is incorrect.
If you set up the PostgreSQL database integration as described in the ChirpStack documentation, the data is stored “forever”, until you send an SQL command via your preferred SQL tool to delete the content (see the PostgreSQL integration page in the ChirpStack docs).
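Until a storage API exists, the time-interval and last-N queries asked for earlier in the thread can be run directly against that table. A sketch with plain SQL, assuming the column names from the documented integration schema (`received_at`, `dev_eui`, `object`) and any DB-API driver such as psycopg2:

```python
# Sketch: query the ChirpStack PostgreSQL integration table device_up
# directly. Column names are taken from the documented integration
# schema; adjust for your ChirpStack version.

INTERVAL_SQL = """
    SELECT received_at, object
    FROM device_up
    WHERE dev_eui = %s
      AND received_at BETWEEN %s AND %s
    ORDER BY received_at
"""

LAST_N_SQL = """
    SELECT received_at, object
    FROM device_up
    WHERE dev_eui = %s
      AND received_at < %s
    ORDER BY received_at DESC
    LIMIT %s
"""


def fetch_interval(conn, dev_eui, ts_a, ts_b):
    """All uplinks for one device between two timestamps."""
    with conn.cursor() as cur:
        cur.execute(INTERVAL_SQL, (dev_eui, ts_a, ts_b))
        return cur.fetchall()


def fetch_last_n(conn, dev_eui, before, n=10):
    """The last n uplinks for one device before a timestamp."""
    with conn.cursor() as cur:
        cur.execute(LAST_N_SQL, (dev_eui, before, n))
        return cur.fetchall()


# import psycopg2
# conn = psycopg2.connect("dbname=chirpstack_as user=chirpstack_as")
# rows = fetch_interval(conn, "0102030405060708", "2021-01-01", "2021-02-01")
```

Parameterized queries (`%s`) keep the DevEUI and timestamps out of the SQL string, which matters if they ever come from user input.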

I currently have 50,000 entries in the “device_up” table, which can be found in the ChirpStack Application Server database.

Basically it’s the same functionality TTN offers with their Storage Integration, except that ChirpStack is missing the API to access the data.
Besides the missing Storage API, it would also be a good idea to add database partitioning (a built-in PostgreSQL feature) and/or the “external” TimescaleDB extension, which would make it easier to handle the time-series data. TTN also seems to use Timescale. With that, an automatic retention policy can be applied quite simply.

My main issue for the moment is the missing API similar to what TTN is offering with its “Retrieve messages” endpoint.

I was not aware that this feature exists in ChirpStack - thank you.

Unlike you, I need a way to shut this functionality off, or at least cap the disk space it uses, because most of my servers have a small SSD that I don’t wish to fill up.

As one currently must intentionally turn this on, by default data is still “lost forever” after a short while. That is my expectation (and desire :slight_smile:).

By default only the MQTT option is set, which is your “temp” storage.

As I, and most likely other people as well, need to be able to access the data for a longer time (in my case currently electric meter data,…) in order to use the information for invoicing, actual long-term storage is required.
As I mentioned, the TimescaleDB extension on top of PostgreSQL allows you to define when data is automatically removed from the database (see “Create a retention policy” in the Timescale docs).
We are using this for AIS data (position data of ships). Compared to the small packages arriving via LoRa, the amount of data there is more of a problem.
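A sketch of what applying such a policy could look like, assuming the timescaledb extension is installed, Timescale 2.x function names, and that `device_up` can be converted to a hypertable on `received_at` (untested against an actual ChirpStack schema):

```python
# Sketch: TimescaleDB retention on device_up. Assumes the timescaledb
# extension is installed and Timescale 2.x function names; run once
# with any PostgreSQL driver.

SETUP_SQL = [
    # Turn the existing table into a hypertable partitioned on received_at,
    # migrating the rows that are already there.
    "SELECT create_hypertable('device_up', 'received_at', migrate_data => true);",
    # Automatically drop chunks older than one year.
    "SELECT add_retention_policy('device_up', INTERVAL '1 year');",
]


def apply_retention(conn):
    """Execute the setup statements on an open DB-API connection."""
    with conn.cursor() as cur:
        for stmt in SETUP_SQL:
            cur.execute(stmt)
    conn.commit()
```

The interval in `add_retention_policy` is the knob for how long data is kept, which would also address the small-SSD concern raised above.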

Just for reference, here are the table statistics from my test installation for the electric meter readings (note that the estimated rows are a little lower than the actual numbers; details below the table).
As you can see, disk space is not really an issue (50 MB for 50,000 entries, depending to some extent on the decoded JSON data; the data is stored as the PostgreSQL JSONB type, which is the optimized way to do it).

table_name       row_estimate   total     table       index     toast
device_up        47872.0        53 MB     35 MB       18 MB     8192 bytes
device_error     20827.0        7152 kB   4672 kB     2472 kB   8192 bytes
device_join      0.0            96 kB     8192 bytes  80 kB     8192 bytes
device_ack       0.0            96 kB     8192 bytes  80 kB     8192 bytes
device_status    0.0            48 kB     0 bytes     40 kB     8192 bytes
device_location  0.0            48 kB     0 bytes     40 kB     8192 bytes

In "device_up" there are actually 49979 rows.

device_name                                    number_of_entries
test_node_type_1_internal_1___5_min_interval   29625
test_node_type_1_proto_1___15_min_interval     10282
test_node_type_1_proto_2___15_min_interval     9934
test_node_type_1_internal_2___15_min_interval  65
test_node_type_2_internal_3___10_min_interval  55
test_node_type_1_internal_4___15_min_interval  18

All nodes send electric meter readings read via Modbus.
The first node sends data every 5 minutes (active and inactive due to some internal tests; the timeframe is roughly 4 months), and the two prototypes send data every 15 minutes (active for about 4 months now).

For the “device_error” table, most of the errors are “frame-counter did not increment” messages. Actually the frame counter did increment, but the device sent the data twice (or ChirpStack received it twice). This seems to be something I need to address in the Dragino RS485-LN which is used to read the Modbus registers: it is most likely the AT+RPL setting I need to update.

As I was getting duplicate data from some (not all) of the nodes, I added a database constraint which prevents storing the same data multiple times (this has been active for about 2 months, I think). I compare the important columns, and when they are the same and the “received_at” timestamps only differ by a few milliseconds, the additional rows do not get stored, so I only get one new entry in the “device_up” table. ChirpStack does not know this, so it still writes error messages into “device_error” for the duplicates.
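My actual constraint is more involved (it also looks at the received_at delta), but the core idea can be sketched as a BEFORE INSERT trigger that silently skips an uplink whose frame has already been stored. Column names (`dev_eui`, `f_cnt`) are assumed from the integration schema; `EXECUTE FUNCTION` needs PostgreSQL 11+:

```python
# Sketch: a PostgreSQL trigger that drops a duplicate uplink row
# instead of inserting it. Column names are assumed from the
# ChirpStack PostgreSQL integration schema; adjust to your version.

DEDUP_TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION skip_duplicate_up() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM device_up
               WHERE dev_eui = NEW.dev_eui AND f_cnt = NEW.f_cnt) THEN
        RETURN NULL;  -- returning NULL from a BEFORE trigger drops the row
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER device_up_dedup
    BEFORE INSERT ON device_up
    FOR EACH ROW EXECUTE FUNCTION skip_duplicate_up();
"""


def install_dedup_trigger(conn):
    """Install the trigger on an open DB-API connection."""
    with conn.cursor() as cur:
        cur.execute(DEDUP_TRIGGER_SQL)
    conn.commit()
```

Because the row is dropped silently (rather than raising a unique-violation error), ChirpStack’s own insert does not fail; it just ends up writing nothing for the duplicate.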

Hello Anna! I now have the same problem, and I have read what people recommended about using the database integration.
Did you resolve this question? If yes, can you help me? You can find my email in my contact details.