For some reason, it just eats up all of my system's memory despite the --memory 16g flag. I restarted my host last night, and today I see this. htop shows a bunch of PIDs from user 200 using 5G of memory.
$ free -g
               total        used        free      shared  buff/cache   available
Mem:              31          22           5           0           3           8
Swap:             39           0          39
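In case it helps anyone reproduce, this is roughly how I'm cross-checking the container-level view against per-process usage (assuming Docker as the runtime; the container name here is a placeholder):

# Placeholder container name; compares cgroup-level usage against the 16g limit.
docker stats --no-stream nexus
# Per-process RSS for user 200 (the UID the Nexus image runs as), largest first:
ps -o pid,user,rss,cmd -U 200 --sort=-rss | head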
I have the same problem (running it in a container on Kubernetes/Docker) since I upgraded to CentOS 9. Nexus (v3.42.0) eats up all the memory after a few days until it gets killed by the OOM killer. I set the Java runtime parameters according to the documentation, but memory usage still climbed back up to 70Gi.
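For context, the knobs in question look roughly like this (values are illustrative, not my exact settings; INSTALL4J_ADD_VM_PARAMS is the variable the stock sonatype/nexus3 image reads extra JVM options from):

# Illustrative values only. -Xmx caps the Java heap and
# -XX:MaxDirectMemorySize caps off-heap direct buffers; the container
# memory limit on its own constrains neither of these from the JVM's side.
docker run -d --memory 12g \
  -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g" \
  sonatype/nexus3:3.42.0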
This isn't a problem we've heard widely reported. If you're a Pro customer, you should open a ticket.
As an OSS user, you'd need to take a heap dump and use an analyzer to try to figure out what is consuming the memory.
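A minimal sketch of capturing one, assuming the bundled JDK includes jmap and using placeholder names for the container and the Java PID:

# Dump only live objects to keep the file smaller; the path is inside the container.
docker exec nexus jmap -dump:live,format=b,file=/nexus-data/heap.hprof <java-pid>
# Copy it out and open it in an analyzer such as Eclipse MAT.
docker cp nexus:/nexus-data/heap.hprof .

Note that if the growth is off-heap (direct buffers, metaspace), the heap dump may look unremarkable even while the process RSS keeps climbing.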
You should be careful if your instance is getting regularly killed, as that can lead to database corruption. It may be worth repairing the database now, since corruption there could be what's driving the out-of-control memory usage.
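For the embedded OrientDB case (still the default in 3.42.x), the repair path looks roughly like the sketch below. This is from memory, so check Sonatype's support documentation for your exact version first, and back up /nexus-data before touching anything:

# Stop Nexus first; the console needs exclusive access to the database files.
./bin/nexus stop
java -jar ./lib/support/nexus-orient-console.jar
# Inside the console (paths assume the default data directory layout):
#   connect plocal:/nexus-data/db/component admin admin
#   repair database --fix-graph --fix-links --fix-ridbags --fix-bonsai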