Outstanding questions in Sonatype blog about Docker images

A number of people have posted questions regarding the blog post below, and I have the same questions/problems.

Specifically:

  • Client.Timeout exceeded while awaiting headers
  • “disable-legacy-registry” is not an attribute the daemon supports

Ultimately, I need to push a newly created Docker image to the Nexus Repository; however, the login step in the blog post fails, as other people have noted. Perhaps there is a missing step that opens the port…? I installed the Docker image using a Helm chart into a MicroK8s node. Thank you.

Hi @fnbrier and welcome! Thanks for calling this to our attention.

I would suggest referring to our Help docs, guides, and Sonatype Learn, as this blog post is a bit outdated now. There is a call-out to this at the top of the page, but it’s easy to miss and might need to be made clearer. I’ll look into this.

This might be a good place to start: Pushing Images

But there are a number of potentially helpful resources in the nav on the left side of that page.

Let me know if that’s helpful at all.

Thank you Maura. Based on your link, I changed the docker-group and docker-private repositories to use https instead of http. I also added entries to repo-values.yaml for their registries. External Endpoints are now displayed, but non-functional. Unfortunately, the forum says that I (a new user) cannot post more than 2 links, so my attempt to include the repo-values.yaml and the docker login commands with their error messages was rejected; the error messages are the same as those listed above.

The registries now have external endpoints via the load balancer, and the Nexus webpage is visible. However, the Internal Endpoint is shown as nexus-repo-nexus-repository-manager-docker-8082:8082 TCP, with the External Endpoint as 192.168.1.181:8082. I am guessing that the External Endpoint’s IP address is a new DHCP-assigned address for the internal nexus-repo-nexus-repository-manager-docker-8082 hostname, instead of the solar-nexus hostname or the 192.168.1.145 IP address. Both docker login commands fail.

What might I be doing wrong?

I would suggest looking at why the login commands fail; I would imagine they produce some error.


Thank you Matthew. The forum wasn’t letting me post the error messages.

docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

docker login -u admin -p admin123 192.168.1.181:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.181:8082/v2/": dial tcp 192.168.1.181:8082: connectex: No connection could be made because the target machine actively refused it.
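
As an aside: docker login defaults to HTTPS. If the Docker connectors on 8082/8083 are configured for plain HTTP, the client’s Docker daemon would also need them listed as insecure registries once connectivity works. A sketch of /etc/docker/daemon.json, assuming HTTP connectors and the addresses from this thread:

```json
{
  "insecure-registries": [
    "192.168.1.145:8082",
    "192.168.1.145:8083"
  ]
}
```

Restart the Docker daemon after editing this file. Note that this would not by itself explain the timeout and connection-refused errors above, which point at connectivity rather than TLS.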

That seemed to be a default setting that I have now changed. Thanks for bringing that to my attention!

Thank you Maura. The repo-values.yaml is currently:

ingress:
  enabled: false
  path: /
  annotations: {
    kubernetes.io/ingress.class: nginx,
    cert-manager.io/cluster-issuer: letsencrypt
  }
  tls:
    enabled: true
    secretName: nexus-tls
nexus:
  nexusPort: 8081
  service:
    enabled: true
    type: LoadBalancer
    loadBalancerSourceRanges:
      - 192.168.1.145/32
    loadBalancerIP: 192.168.1.145
  env:
    - name: NEXUS_SECURITY_RANDOMPASSWORD
      value: "false"
  dockerSupport:
    enabled: true
  docker:
    enabled: true
    registries:
      - host: 192.168.1.145
        port: 8082
        secretName: registry-secret
      - host: 192.168.1.145
        port: 8083
        secretName: registry-secret
nexusProxy:
  env:
    nexusHttpHost: solar-nexus.office.multideck.com
persistence:
  storageSize: "200Gi"
service:
  name: nexus3
  enabled: true
  type: LoadBalancer
  annotations:
    metallb.universe.tf/allow-shared-ip: "{{ ndo_context }}"
  loadBalancerIP: 192.168.1.145

I suspect that a different hostname, other than solar-nexus, is being used for the docker-group and docker-private registry settings; that hostname is then assigned an IP via DHCP and used for the External Endpoint. Of course, there is no server listening at that address. My next step might be to start reading through the Nexus Repository Manager source code. I was hoping that someone had already bumped their head on this, as there is a Helm chart and this was a recommended setup (as per the blog).

I directly updated the yaml for the nexus-repo-nexus-repository-manager-docker-8082 service, adding:

loadBalancerIP: 192.168.1.145
loadBalancerSourceRanges:
  - 0.0.0.0/0
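
For reference, that manual edit can also be expressed as a kubectl patch (a sketch; the service name is taken from this thread and is an assumption about your cluster):

```shell
# Merge-patch the generated docker-8082 service with the load balancer settings
kubectl patch svc nexus-repo-nexus-repository-manager-docker-8082 \
  --type merge \
  -p '{"spec":{"loadBalancerIP":"192.168.1.145","loadBalancerSourceRanges":["0.0.0.0/0"]}}'
```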

The External Endpoint now shows the correct IP address. The previous error message:

PS C:\Users\fbrier> docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

With this change becomes:

PS C:\Users\fbrier> docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": dial tcp 192.168.1.145:8082: connectex: No connection could be made because the target machine actively refused it.

I am guessing (not knowing Helm) that adding the following after line 51 of helm3-charts/charts/nexus-repository-manager/templates/service.yaml would generate the required service.yaml:

    {{- if $.Values.nexus.loadBalancerIP }}
    loadBalancerIP: {{ $.Values.nexus.loadBalancerIP }}
    loadBalancerSourceRanges:
      - 0.0.0.0/0
    {{- end }}

Unfortunately, the above error is preventing login. The service.yaml has both a port and a nodePort attribute, which it probably should not.

spec:
  ports:
    - name: docker-8082
      protocol: TCP
      port: 8082
      targetPort: 8082
      nodePort: 32034

Please let me know if this is incorrect. I will try to delete nodePort and then port and see if it fixes it. Thank you.

The error messages suggest that your load balancer isn’t listening on that port.

I suspect that the problem is in the generation of the service.yaml files for each Values.nexus.docker.registries entry. Port 8081 has the correct IP/clusterIP (10.152.183.145), but examining the service.yaml in the Kubernetes Dashboard, 8082 and 8083 have different clusterIPs: 10.152.183.247 and 10.152.183.203 respectively. I have a dnsutils pod set up, which I tty’ed into. Telnet’ing to 10.152.183.145:8081 works. Telnet’ing to 10.152.183.247:8082 fails, while telnet’ing to 10.152.183.145:8082 works. This would indicate that the service.yaml generated from the template is incorrect and that the load balancer is attempting to connect to the wrong IP. The helm3-charts/charts/nexus-repository-manager/templates/service.yaml template does not have lines for the clusterIP attributes:

  clusterIP: 10.152.183.247
  clusterIPs:
    - 10.152.183.247

They must be generated implicitly. I will try to learn how those attributes are generated. Kubernetes won’t let me change them once they are set, so I cannot test them before going to the trouble of modifying the template.
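
The telnet checks above can be scripted so the same probes are easy to rerun from the dnsutils pod. A minimal bash sketch using bash’s built-in /dev/tcp (the helper name is hypothetical, and the example IPs are the ones from this thread):

```shell
#!/usr/bin/env bash
# check_port: report whether a TCP connection to host:port succeeds.
# Mirrors the manual telnet tests against the service clusterIPs.
check_port() {
  local host=$1 port=$2
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} unreachable"
    return 1
  fi
}

# Usage from inside the cluster (IPs from this thread):
#   check_port 10.152.183.145 8081   # worked via telnet
#   check_port 10.152.183.247 8082   # failed via telnet
```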

Hi Frederick,

Here are a few suggestions to help narrow down the problem. If you haven’t already done so, please list the ingresses and the services using kubectl get ingresses --all-namespaces and kubectl get services --all-namespaces.

Then describe each service and associated ingress using kubectl describe svc <service name> and kubectl describe ingress <ingress name> respectively. The aim is to check that the ingress and the associated service contain matching IPs and ports.

For example, for each ingress, the backend IP and port displayed by kubectl describe ingress <ingress name> should match the IP and port of the corresponding service shown by kubectl describe svc <service name>.

Also, the Endpoints IP and port displayed for each kubectl describe svc <service name> should be set.