Please help us analyze the cause of these abnormal shutdowns

We are running Sonatype Nexus Repository Manager OSS 3.20.1-01 on AWS EKS in production.

Resource spec:
cpu: 4000m
memory: 60Gi

ENV:
INSTALL4J_ADD_VM_PARAMS: -Xms12G -Xmx12G -XX:MaxDirectMemorySize=35158M -Djava.util.prefs.userRoot=/nexus-data/javaprefs
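
For reference, these values are wired into the pod roughly as in the following container spec fragment. The container name, image tag, and whether the values sit under requests or limits are illustrative assumptions; only the cpu/memory values and the VM parameters above are exact:

  containers:
    - name: nexus                      # assumed container name
      image: sonatype/nexus3:3.20.1    # assumed image/tag
      resources:
        limits:                        # shown as limits for illustration
          cpu: 4000m
          memory: 60Gi
      env:
        - name: INSTALL4J_ADD_VM_PARAMS
          value: >-
            -Xms12G -Xmx12G -XX:MaxDirectMemorySize=35158M
            -Djava.util.prefs.userRoot=/nexus-data/javaprefs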

On 11/11 the Nexus pod stopped after [java.lang.OutOfMemoryError: GC overhead limit exceeded] appeared in the log.
Unfortunately, we were not yet set up to monitor JVM metrics.
But from the pod metrics, only 45.4 GB of memory was in use when the pod shut down, while CPU usage was almost 100%.
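(For reference, the configured JVM ceiling works out to roughly 12 GiB heap + ~34.3 GiB MaxDirectMemorySize ≈ 46 GiB, plus metaspace and thread stacks, so the 45.4 GB observed is close to that configured ceiling rather than to the 60 Gi pod limit, assuming we read the settings correctly.)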

Before the OutOfMemoryError appeared in the log file, we got a lot of Elasticsearch JVM GC log messages.
At 2022-11-01 04:47:04,541+0000: {[young] [1.1gb]->[19.4mb]/[1.3gb]}{[survivor] [732.7mb]->[0b]/[1.3gb]}{[old] [7.4gb]->[4.7gb]/[8gb]} → which looks normal
At 2022-11-01 05:01:53,755+0000: {[young] [2.7gb]->[15.8mb]/[2.8gb]}{[survivor] [437.2mb]->[0b]/[537mb]}{[old] [7.6gb]->[7.9gb]/[8gb]} → which looks abnormal (old gen nearly full)
From this GC onward, every GC log entry looked abnormal,
until 2022-11-01 05:58:28,374+0000: {[young] [2.8gb]->[2.8gb]/[2.8gb]}{[survivor] [0b]->[0b]/[537mb]}{[old] [7.9gb]->[7.9gb]/[8gb]}

Then the OutOfMemoryError occurred.

After the pod restarted, the service recovered.

But on 11/16 the Nexus pod stopped again. This time there were no Elasticsearch JVM GC messages and no OutOfMemoryError; the pod received a SIGTERM instead.
Before the SIGTERM, Nexus had been running the [docker - delete incomplete uploads] task for about 30 minutes.
Between the task start and the SIGTERM, several Java exceptions can be found in the log.
A lot of [org.sonatype.nexus.blobstore.file.FileBlobStore - Attempt to access soft-deleted blob path] log messages can be found as well.

Is there any correlation between these two shutdowns?
Please help us locate any unsuitable settings in our Nexus deployment.

Thanks.

BR