Failure servicing - Connection reset by peer

I’m using the latest Nexus (3.30.1-01) Docker image in an Azure OpenShift cluster. A Maven build from a Jenkins job (Jenkins runs in OpenShift too) sometimes fails because of random GET failures from Nexus when it connects to Maven Central (Central Repository). If I relaunch the same build, it works…
The failing artifacts are never the same…
Below is the Nexus log output:
2021-05-25 09:40:00,005+0000 INFO [quartz-8-thread-19] *SYSTEM org.sonatype.nexus.quartz.internal.task.QuartzTaskInfo - Task 'Storage facet cleanup' [repository.storage-facet-cleanup] state change RUNNING → WAITING (OK)
2021-05-25 09:45:29,293+0000 WARN [qtp1679413351-1798] *UNKNOWN org.sonatype.nexus.repository.httpbridge.internal.ViewServlet - Failure servicing: GET /repository/maven-public/com/opentable/components/otj-pg-embedded/0.13.3/otj-pg-embedded-0.13.3.jar
org.eclipse.jetty.io.EofException: null
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:279)
at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:422)
at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:378)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:119)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:298)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.write(IOUtil.java:148)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:503)
at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:273)
… 11 common frames omitted
2021-05-25 09:50:00,002+0000 INFO [quartz-8-thread-19] *SYSTEM org.sonatype.nexus.quartz.internal.task.QuartzTaskInfo - Task 'Storage facet cleanup' [repository.storage-facet-cleanup] state change WAITING → RUNNING

Hi Eric,
Your stack trace, especially the part that says

EofException: null

suggests there’s an issue with the network connection between your Nexus Repository and Maven Central. Possibly an issue with a firewall or a reverse proxy in your network.

I would say the error occurs between the client and Nexus.
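
For what it’s worth, the exception is thrown from ChannelEndPoint.flush / SocketChannelImpl.write, i.e. while Jetty is writing the response body back to whoever requested the jar, so the reset comes from that side of the connection. A standalone plain-Java sketch (not Nexus or Jetty code; the class name and buffer sizes are arbitrary) shows the same "Connection reset by peer" appearing on the side that is still writing when its peer aborts:

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Standalone sketch: reproduce "Connection reset by peer" on a server-side
// write after the peer drops the connection, mirroring the failing
// ChannelEndPoint.flush frame in the Nexus stack trace above.
public class ResetOnWriteDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // "Client": connects, then closes with SO_LINGER=0 so the close is
            // sent as a TCP RST, like a client that aborts mid-download.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port)) {
                    s.setSoLinger(true, 0);
                } catch (IOException ignored) {
                }
            });
            client.start();

            // "Server": accepts the connection and keeps writing a large body.
            try (Socket conn = server.accept()) {
                client.join();
                OutputStream out = conn.getOutputStream();
                byte[] chunk = new byte[64 * 1024];
                for (int i = 0; i < 100; i++) {
                    out.write(chunk); // fails once the RST has been processed
                    out.flush();
                }
                System.out.println("writes completed (unexpected)");
            } catch (IOException e) {
                // Typically "Connection reset by peer" or "Broken pipe",
                // depending on OS and timing.
                System.out.println("server-side write failed: " + e.getMessage());
            }
        }
    }
}

The peer that disappears is the one that issued the GET; the side still writing is the one that logs the IOException.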

I don’t know if it helps, but below is the log of the Maven build running on the Jenkins agent (Nexus and the Jenkins agent run in the same OpenShift project and network).

[ERROR] Failed to execute goal on project common-test: Could not resolve dependencies for project com.transwide:common-test:jar:21.07-SNAPSHOT: Could not transfer artifact com.opentable.components:otj-pg-embedded:jar:0.13.3 from/to internal-repository (http://nexus-ci.apps.wzb0bvhg.westeurope.aroapp.io/repository/maven-public/): GET request of: com/opentable/components/otj-pg-embedded/0.13.3/otj-pg-embedded-0.13.3.jar from internal-repository failed: Connection reset → [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]

That’s also my question: is the problem between Jenkins/Maven and Nexus within OpenShift, or between Nexus and Azure/the internet?
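
One way to narrow that down would be to repeat the failing GET directly from the Jenkins pod, without Maven in the middle, and see whether the reset still shows up. This is only a rough diagnostic sketch (the class name, loop count and timeouts are arbitrary; the URL and artifact path are the ones from the build log above):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Rough diagnostic sketch (not part of the build): repeatedly GET the artifact
// that failed in the Maven log, directly from the Jenkins pod, and report how
// each attempt ends, so a "Connection reset" can be caught without Maven.
public class NexusFetchLoop {
    public static void main(String[] args) throws Exception {
        // Base repository URL can be passed as an argument; the default and the
        // artifact path are the ones from the failed build above.
        String base = args.length > 0 ? args[0]
                : "http://nexus-ci.apps.wzb0bvhg.westeurope.aroapp.io/repository/maven-public/";
        String path = "com/opentable/components/otj-pg-embedded/0.13.3/otj-pg-embedded-0.13.3.jar";
        for (int i = 1; i <= 50; i++) {
            long start = System.currentTimeMillis();
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(base + path).openConnection();
                conn.setConnectTimeout(10_000);
                conn.setReadTimeout(60_000);
                long bytes = 0;
                try (InputStream in = conn.getInputStream()) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        bytes += n;
                    }
                }
                System.out.printf("#%d OK, %d bytes in %d ms%n",
                        i, bytes, System.currentTimeMillis() - start);
            } catch (Exception e) {
                System.out.printf("#%d FAILED after %d ms: %s%n",
                        i, System.currentTimeMillis() - start, e);
            }
        }
    }
}

If this loop reproduces the reset, the problem is on the client-to-Nexus leg inside OpenShift; if it never fails, the Nexus-to-Central leg (where an artifact is fetched while being proxied) becomes the more likely suspect.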

You have to look at what’s in your network between your clients and Nexus.

The client is a Jenkins pod in the same namespace/project as the Nexus pod; it’s completely internal to OpenShift and on the same network subnet. It looks really strange to me, as there are no other problems in the OpenShift cluster…
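
One detail in the build log that might matter here: the repository URL (nexus-ci.apps.wzb0bvhg.westeurope.aroapp.io) is the external route hostname, so even with both pods in the same project the download presumably travels through the OpenShift router (and possibly the Azure load balancer) rather than directly pod to pod. Running the fetch loop above against both the route and the in-cluster Service address would show whether that path is the piece "in between". The service name, project and port below are placeholders (8081 is only Nexus's default HTTP port):

java NexusFetchLoop http://nexus-ci.apps.wzb0bvhg.westeurope.aroapp.io/repository/maven-public/
java NexusFetchLoop http://<nexus-service>.<project>.svc.cluster.local:8081/repository/maven-public/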