First I would love to say how much I value this project. I have been toying with it for some time and it’s truly amazing!
I am currently deploying the app-server in a containerised setup using balena and a Raspberry Pi 3. Since it wasn’t possible to build it from source in an Alpine image (due to a go-bindata library error), I am simply using a Debian image and installing it with apt.
The problem is that I am trying to squeeze several services onto the Raspberry Pi, so I need to optimise each and every service. Thus, I was wondering whether it is possible to disable the graphical interface of the app-server and use only the API endpoint.
Any other suggested optimisations that could make the app-server less memory-hungry will be much appreciated!
You got me, it isn’t memory-hungry per se, but for my use-case I really need to optimise it as much as possible. I only need it to manage 1 device (receive uplinks).
I’ve got LoraServer running without issue on a Pi, including all of the components. It’s basically a gateway that contains the gateway-bridge, network-server, and app-server, and whilst it’s tight, it does work.
Thanks @Proffalken for your reply! I know it’s possible, but for several architectural reasons, currently all the loraserver infrastructure runs on a separate “gateway” device and I only run the app-server on the Raspberry Pi. I was wondering whether users here know any tricks for optimising the app-server, ideally turning off the web interface entirely.
If the only thing you’re running on the pi is the app server, it’s highly unlikely you’ll need to switch the web UI off, in fact I don’t even think it’s possible!
What else are you running on the Pi that makes you need to strip the web UI out?
Fwiw, I bought a Bitscope Blade Quattro a few months back and mounted 4 Pis on there: one for loraserver, one for InfluxDB, one for Grafana, and one for our farm management solution. The one running loraserver runs Postgres, Redis, the network and app servers, and doubles as a router/gateway for the rest of the network, so I’m really not convinced you’re going to have the issues that you think you might!
I am not saying that lora-app-server is hungry, but I am trying to cut corners on every service so that I can add even more services that are needed for my use case. In case I can’t manage it, I will split the application logic across two devices.
Currently lora-app-server receives an uplink every 5 minutes and pushes it to an online server via the HTTP integration.
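For anyone setting up something similar: the HTTP integration POSTs a JSON event for each uplink, with the device payload base64-encoded. Below is a minimal, hedged sketch of a receiver in Python using only the standard library. The field names (`devEUI`, `data`), the port, and the endpoint behaviour are assumptions based on common app-server versions; check the event schema for the version you are running.

```python
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def decode_uplink(body: bytes) -> dict:
    """Parse an uplink event JSON body and decode its base64 payload.

    NOTE: the 'data' and 'devEUI' field names are assumptions; verify
    them against your app-server's HTTP integration documentation.
    """
    event = json.loads(body)
    event["decoded_data"] = base64.b64decode(event.get("data", ""))
    return event


class UplinkHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = decode_uplink(self.rfile.read(length))
        print(f"uplink from {event.get('devEUI', '?')}: {event['decoded_data']!r}")
        self.send_response(200)  # acknowledge so the integration doesn't retry
        self.end_headers()


if __name__ == "__main__":
    # Hypothetical port; point the HTTP integration's uplink URL here.
    HTTPServer(("0.0.0.0", 8090), UplinkHandler).serve_forever()
```

At one uplink every 5 minutes, a single-threaded stdlib server like this is more than enough and keeps the memory footprint tiny.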
ok, so you’re running Resin.io, docker, and a load of other things on that box, which means that you’re overloading it already.
I’d humbly suggest that getting rid of the lora server UI is not the solution to your problem, as even if it were possible, all it’s serving is static JavaScript to interact with the various APIs, so it’s going to have minimal impact on that system.
I’d suggest either running this across multiple Pi’s, ditching Resin/Balena (even though it’s something that I love and have blogged about in the past!), or moving everything out of Docker containers and running them on the bare metal.
Failing that, either shell out for a Pi 4 with 4 GB of RAM, or look at a cloud provider such as DigitalOcean.com, where you can get a more than adequate VM for under $10/month.
I’ve spent the last 20 years working as a Systems Administrator and Infrastructure Consultant, and you’re running too much on that Pi…
Thank you so much for the time and effort you put into the reply.
I am using a Raspberry Pi because the project is in fact part of my master’s thesis in the area of edge computing. I managed to get the Pi running smoothly by removing lora-redis and lora-postgres and configuring the app-server to use the Redis/Postgres of the network server (which is located on a different board).
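In case it helps others, pointing the app-server at a Redis/Postgres instance on another board boils down to two settings in its TOML configuration. This is a sketch only: the host address, credentials, and database name below are placeholders, and the exact section/key names may differ between app-server versions, so check your version’s example config.

```toml
# lora-app-server.toml (fragment) -- hosts and credentials are placeholders

[postgresql]
# DSN of the Postgres instance on the network-server board
dsn="postgres://loraserver_as:password@192.168.1.10:5432/loraserver_as?sslmode=disable"

[redis]
# URL of the shared Redis instance on the network-server board
url="redis://192.168.1.10:6379"
```

With these set, the local lora-redis and lora-postgres containers can be removed entirely, which is where most of the memory saving comes from.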
Since I have another 2 minimal services (small Python scripts), I think I will be able to make it work without further distributing the architecture. If it’s OK with you, could I contact you via DM for feedback on my setup, i.e. what I can do to improve/optimise without compromising important parameters of my use case?