- cross-posted to:
- [email protected]
silly judgemental post, not meant to be taken too seriously (unless you agree with me, in which case I'm dead serious)
How does Lemmy run for you? I get weird stretches of Bad Gateway errors every once in a while.
It runs perfectly fine most of the time, but occasionally it locks up my entire server until I reboot.
I’ve been working on getting better monitoring and log aggregation set up so I can troubleshoot what is actually happening, but it’s slow going. As of now I can’t tell if the database is getting overloaded, if the frontend is getting spammed, or what’s really going on.
My instance has two users and it runs on a VPS with 2 CPUs and 4GB of RAM.
Check the RAM usage on Postgres. There’s a memory leak issue that’s being tracked, with a proposed fix in the next release (which upgrades to a newer version of Postgres).
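If you want to spot-check it, something like this works as a one-shot reading. This is just a minimal sketch using the Docker SDK for Python (`pip install docker`); the container name `postgres` is an assumption, so adjust it to whatever your compose file calls the database container:

```python
# Sketch: read the Postgres container's current memory usage,
# roughly equivalent to `docker stats --no-stream postgres`.
import docker

client = docker.from_env()
# Assumed container name; match it to your own setup.
container = client.containers.get("postgres")

stats = container.stats(stream=False)  # one-shot stats snapshot
usage = stats["memory_stats"]["usage"]
limit = stats["memory_stats"]["limit"]  # host total if no limit is set
print(f"postgres memory: {usage / 2**20:.0f} MiB of {limit / 2**20:.0f} MiB "
      f"({100 * usage / limit:.1f}%)")
```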
Thank you! I was secretly hoping someone might have a quick suggestion of something to try. I’ll see what I can find out.
Yeah, no problem! My workaround is simply restarting the Postgres container when I notice RAM usage spiking.
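For reference, that workaround could be automated with a small watchdog loop. Again just a sketch built on the Docker SDK for Python; the container name, threshold, and poll interval are all assumptions to tune for your own setup:

```python
# Sketch: restart the Postgres container when its memory usage
# crosses a threshold, automating the manual "restart on spike" fix.
import time
import docker

THRESHOLD = 0.85  # assumed: restart above 85% of the memory limit
INTERVAL = 60     # assumed: seconds between checks

client = docker.from_env()

while True:
    container = client.containers.get("postgres")  # assumed name
    stats = container.stats(stream=False)
    usage = stats["memory_stats"]["usage"]
    limit = stats["memory_stats"]["limit"]  # host total if no limit is set
    if usage / limit > THRESHOLD:
        # Graceful stop first; Postgres shuts down cleanly on SIGTERM.
        container.restart(timeout=30)
    time.sleep(INTERVAL)
```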
Usually by the time I notice, the server is already unreachable over SSH, but I’ve been considering adding manual healthchecks to my containers. Paired with the docker-autoheal project, it’s been a really low-effort way for me to keep services healthy without a lot of babysitting. I’m more nervous about implementing it for something like a stateful database, though I suppose it’s no different from manually issuing a docker restart command.
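For anyone curious, the autoheal pattern boils down to a loop like this. This is a sketch of the idea, not docker-autoheal’s actual code; it assumes your containers define a healthcheck (a `HEALTHCHECK` in the Dockerfile or `healthcheck:` in compose), and the `autoheal=true` opt-in label and timings here are assumptions:

```python
# Sketch: restart any opted-in container whose built-in healthcheck
# reports "unhealthy" -- the same effect as a manual `docker restart`.
import time
import docker

client = docker.from_env()

while True:
    # Only watch containers labeled autoheal=true (assumed opt-in label).
    for container in client.containers.list(filters={"label": "autoheal=true"}):
        container.reload()  # refresh inspect data, including health state
        health = container.attrs.get("State", {}).get("Health", {}).get("Status")
        if health == "unhealthy":
            container.restart(timeout=30)
    time.sleep(30)
```

The nice part is that the restart decision lives in the healthcheck itself, so for a stateful database you can make the check conservative (e.g. `pg_isready`) rather than restarting on every blip.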