Postgres connections grow - possibly FUOTA

Hello, we’ve run into some strange behaviour a few times.

The number of connections opened by both the application server and the network server grows over time, exceeding 100+ active connections.
pg_stat_activity shows entries like:

select * from device where dev_eui = $1
select * from device_queue where dev_eui = $1 order by f_cnt limit 2
select * from routing_profile where routing_profile_id = $1
select * from integration where application_id = $1 order by kind
COMMIT  (x42)    - could it be some kind of locking? lots of strange old COMMITs
select * from application where id = $1    (x7)
select * from device_queue where dev_eui = $1 order by f_cnt    (x8)
update device set last_seen_at = $2, dr = $3 where dev_eui = $1    (x4)
select * from gateway where mac = any($1)    (x4)
select network_server_id, organization_id, created_at, updated_at, name, payload_codec, payload_encoder_script, payload_decoder_script, tags, uplink_interval from device_profile where device_profile_id = $1    (x4)
insert into device_queue ( created_at, updated_at, dev_addr, dev_eui, frm_payload, f_cnt, f_port, confirmed, emit_at_time_since_gps_epoch, is_pending, timeout_after, retry_after ) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) returning id    (x4)
select pid as process_id, usename as username, datname as database_name, client_addr as client_address, application_name, backend_start, state, state_change, query from pg_stat_activity

Some connections have been idle for days, often grouped by time, as if they were opened at roughly the same moment.
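
For reference, something like the following can show how the connections are distributed (a sketch; the datname filter assumes the default chirpstack_as / chirpstack_ns database names):

-- count backends per state (active vs. idle vs. idle in transaction)
select state, count(*)
from pg_stat_activity
where datname in ('chirpstack_as', 'chirpstack_ns')  -- assumed database names
group by state;

-- list backends that have been idle for more than a day
select pid, application_name, state, state_change, query
from pg_stat_activity
where state = 'idle' and state_change < now() - interval '1 day'
order by state_change;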

It feels like it correlates with FUOTA launches, but we’re not sure (we are using https://github.com/brocaar/chirpstack-fuota-server/blob/master/examples/class_c/main.go).

Coincidentally, a few multicast groups are also visible in the application server. If I recall correctly, they are usually cleaned up automatically, but these have existed for some time already.

application server: 3.17.0
network server: 3.15.0

What are your postgresql settings in the TOML file? You could limit the maximum number of open connections in that section (see max_open_connections).
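
For reference, the [postgresql] block in the configuration looks roughly like this (a sketch; the DSN and the values are illustrative, not recommendations):

[postgresql]
dsn="postgres://user:password@localhost/chirpstack_as?sslmode=disable"

# maximum number of open connections (0 means unlimited)
max_open_connections=10

# maximum number of idle connections to keep in the pool
max_idle_connections=2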

The max_open_connections limiter sounds extremely useful, thank you - I will test it.

Do you by any chance have a reasonable range for the value, from your experience, outside of which lower/higher values aren’t really helpful?

The PG library was updated in 3.15.1 / 3.17.1; we had a problem with stale connections. Maybe upgrading to 3.15.1 / 3.17.1 could help too?