A number of people have posted questions regarding the blog post below, and I have the same questions/problems.
Specifically:
Client.Timeout exceeded while awaiting headers
“disable-legacy-registry” is not attribute the daemon supports
Ultimately, I need to push a newly created Docker image to the Nexus Repository; however, the login step in the blog post fails, as others have noted. Perhaps there is a missing step that opens the port…? I installed the Nexus Docker image using a Helm chart into a MicroK8s node. Thank you.
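For context, my install roughly followed the blog's Helm route. A sketch of the steps (the release name and values file name are mine; the chart repo is Sonatype's helm3-charts):

```shell
# Sketch of the setup steps from the blog (release/values names are mine):
helm repo add sonatype https://sonatype.github.io/helm3-charts/
helm repo update
helm install nexus-repo sonatype/nexus-repository-manager -f repo-values.yaml

# The step that fails:
docker login -u admin -p admin123 <nexus-host>:8082
```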
Hi @fnbrier and welcome! Thanks for calling this to our attention.
I would suggest referring to our Help docs, guides, and Sonatype Learn as this blog post is a bit outdated now. There is a call-out to this at the top of the page but it’s easy to miss and might need further clarity. I’ll look into this.
Thank you Maura. Based on your link, I changed the docker-group and docker-private repositories to use https instead of http, and added entries for their registries to repo-values.yaml. External Endpoints are now displayed, but non-functional. Unfortunately, the forum says a new user cannot post more than 2 links, so my attempt to include the repo-values.yaml and the docker login commands with their error messages was rejected; the error messages are the same as those in the blog comments.
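Since the forum blocked the file itself, here is a sketch of the shape of my repo-values.yaml. The key names follow the sonatype/helm3-charts nexus-repository-manager chart; the hostnames and secret name are examples, not verbatim from my file:

```yaml
# Sketch of repo-values.yaml (hostnames and secret name are examples):
nexus:
  docker:
    enabled: true
    registries:
      - host: solar-nexus          # connector for the docker-group repo
        port: 8082
        secretName: nexus-docker-tls
      - host: solar-nexus          # connector for the docker-private repo
        port: 8083
        secretName: nexus-docker-tls
```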
The registries now have external endpoints via the load balancer, and the Nexus webpage is visible. However, the Internal Endpoint is shown as nexus-repo-nexus-repository-manager-docker-8082:8082 TCP with an External Endpoint of 192.168.1.181:8082. My guess is that the External Endpoint IP is a fresh DHCP-assigned address for the internal nexus-repo-nexus-repository-manager-docker-8082 hostname, instead of the solar-nexus hostname or its 192.168.1.145 address. Both docker login commands fail.
Thank you Matthew. The forum wasn’t letting me post the error messages.
docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
docker login -u admin -p admin123 192.168.1.181:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.181:8082/v2/": dial tcp 192.168.1.181:8082: connectex: No connection could be made because the target machine actively refused it.
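Note that the two failures differ: a timeout (packets silently dropped, nothing answers) versus an active refusal (host reachable, but nothing listening on the port). A raw TCP probe can distinguish them without involving the Docker daemon; here is a minimal sketch (the IP and port are just examples from this thread):

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP connect and classify the outcome."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"      # something accepted the connection
    except socket.timeout:
        return "timeout"       # no response at all: filtered or blackholed
    except ConnectionRefusedError:
        return "refused"       # host reachable, nothing listening on the port
    except OSError as exc:
        return f"error: {exc}" # e.g. no route to host

# Example: probe("192.168.1.145", 8082)
```

"timeout" here matches the Client.Timeout error, while "refused" matches the connectex error, so the probe tells you which layer to look at.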
I suspect that a hostname other than solar-nexus is being used for the docker-group and docker-private registry settings; that hostname is then assigned an IP via DHCP and used as the External Endpoint. Of course, there is no server listening at that address. My next step might be to start reading through the Nexus Repository Manager source code, but I was hoping someone had already bumped their head on this, since there is a Helm chart and this was the recommended setup per the blog.
The External Endpoint now shows the correct IP address. The previous error message:
PS C:\Users\fbrier> docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
With this change becomes:
PS C:\Users\fbrier> docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": dial tcp 192.168.1.145:8082: connectex: No connection could be made because the target machine actively refused it.
I am guessing (not knowing Helm) that adding the following after line 51 in helm3-charts/charts/nexus-repository-manager/templates/service.yaml:
{{- if $.Values.nexus.loadBalancerIP }}
  loadBalancerIP: {{ $.Values.nexus.loadBalancerIP }}
  loadBalancerSourceRanges:
    - 0.0.0.0/0
{{- end }}
would generate the required service.yaml. Unfortunately, the error above is still preventing login. The generated service.yaml also has both a port and a nodePort attribute, which it probably should not.
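For reference, the rendered Service that snippet is trying to produce would look roughly like this (IPs and names from this thread; loadBalancerIP is only honored by load-balancer providers such as MetalLB, and clusterIP is assigned by Kubernetes, not the template):

```yaml
# Sketch of the intended rendered Service (values from this thread):
apiVersion: v1
kind: Service
metadata:
  name: nexus-repo-nexus-repository-manager-docker-8082
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.145
  loadBalancerSourceRanges:
    - 0.0.0.0/0
  ports:
    - name: docker-8082
      protocol: TCP
      port: 8082
      targetPort: 8082
  selector:
    app.kubernetes.io/name: nexus-repository-manager
```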
I suspect that the problem is in the generation of the service.yaml for each Values.nexus.docker.registries entry. Port 8081 has the correct clusterIP (10.152.183.145), but examining the generated service.yaml in the Kubernetes Dashboard, 8082 and 8083 have different clusterIPs: 10.152.183.247 and 10.152.183.203 respectively. I have a dnsutils pod set up, which I opened a tty into. Telnet to 10.152.183.145:8081 works; telnet to 10.152.183.247:8082 fails; telnet to 10.152.183.145:8082 works. This would indicate that the service.yaml generated from the template is incorrect and that the load balancer is attempting to connect to the wrong IP. The helm3-charts/charts/nexus-repository-manager/templates/service.yaml template has no lines for the clusterIP attribute, so it must be assigned implicitly. I will try to learn how those attributes are generated; Kubernetes won't let me change them once they are set, which prevents testing before going to the trouble of modifying the template.
Here are a few suggestions to help narrow down the problem. If you haven't already done so, can you please list the ingresses and the services using kubectl get ingresses --all-namespaces and kubectl get services --all-namespaces?
Then describe each service and its associated ingress using kubectl describe svc <service name> and kubectl describe ingress <ingress name> respectively. The aim is to check that each ingress and its associated service contain matching IPs and ports.
For example, for each ingress the backend IP and port displayed by kubectl describe ingress <ingress name> should be the same as the IP and port of the corresponding service from kubectl describe svc <service name>.
Also, the endpoint IP and port displayed by each kubectl describe svc <service name> should be set.
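Putting those checks together, the sweep would look like this (the service name is from this thread; the ingress name is a placeholder to fill in from the listing):

```shell
# Diagnostic sweep (service name from this thread; fill in the ingress name):
kubectl get ingresses --all-namespaces
kubectl get services --all-namespaces

# The Endpoints line of each describe should list the pod IP:port.
kubectl describe svc nexus-repo-nexus-repository-manager-docker-8082
kubectl describe ingress <ingress-name>

# Empty Endpoints means the Service selector matches no pods, which would
# explain a "connection refused" at that Service's cluster IP.
kubectl get endpoints nexus-repo-nexus-repository-manager-docker-8082 -o wide
```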
@Olu Shiyanbade
Below are the commands you requested I run. Perhaps I am misunderstanding the Nexus Repository architecture, but my current understanding is that there is only one Nexus process. What the output of the commands seems to indicate is that while the exposed ingress ports are all on 192.168.1.145, the 8082 and 8083 ports are mapped to different (bogus) internal cluster IPs. 10.152.183.145 is the correct cluster IP for the Nexus process; it is possible to connect to that cluster IP on ports 8081, 8082, and 8083. However, it is not possible to connect to 10.152.183.247:8082 or 10.152.183.203:8083. My guess is that the Helm chart is creating bogus hostnames and that internal address assignment is giving IPs to server processes that don't exist. I experimented with changing the Nexus Helm chart templates and running the tests (which required setting up Docker Desktop and Minikube), but was not successful. Any assistance would be appreciated. I apologize that I missed your response; I was coding another software component. Thank you.
@fnbrier Continuing your question from the other thread - pricing is dependent on the solution you’re looking to purchase. Let me know which you’re interested in and I can connect you with the right person. I will also see if I can get a follow up on your question above. Thank you!