Outstanding questions in Sonatype blog about Docker images

A number of people have posted questions regarding the blog post below, and I have the same questions/problems.

Specifically:

  • Client.Timeout exceeded while awaiting headers
  • “disable-legacy-registry” is not an attribute the daemon supports

Ultimately, I need to push a newly created Docker image to the Nexus Repository, but the login step in the blog post fails, as others have noted. Perhaps there is a missing step that opens the port…? I installed the Nexus Docker image using a Helm chart on a MicroK8s node. Thank you.

Hi @fnbrier and welcome! Thanks for calling this to our attention.

I would suggest referring to our Help docs, guides, and Sonatype Learn as this blog post is a bit outdated now. There is a call-out to this at the top of the page but it’s easy to miss and might need further clarity. I’ll look into this.

This might be a good place to start: Pushing Images

But there are a number of potentially helpful resources in the nav on the left side of that page.

Let me know if that’s helpful at all.

Thank you Maura. Based on your link, I changed the docker-group and docker-private repositories to use https instead of http, and I added entries to the repo-values.yaml for their registries. External Endpoints are displayed, but non-functional. Unfortunately, the forum says a new user cannot post more than 2 links, so my attempt to include the repo-values.yaml and the docker login commands failed; the error messages are the same as those listed above.

The registries now have external endpoints via the load balancer, and the Nexus webpage is visible. However, the Internal Endpoint is shown as nexus-repo-nexus-repository-manager-docker-8082:8082 TCP with an External Endpoint of 192.168.1.181:8082. I am guessing that the External Endpoint IP is a new DHCP-assigned address for the internal nexus-repo-nexus-repository-manager-docker-8082 hostname, instead of the solar-nexus hostname or the 192.168.1.145 IP address. Both docker login commands fail.

What might I be doing wrong?

I would suggest looking at why the login commands fail; I would imagine they produce some error.
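One quick way to do that, before involving docker login at all, is to probe the registry's /v2/ endpoint with curl, since the two failure modes in this thread behave differently. A sketch, where 127.0.0.1:9 stands in for a port with no listener; substitute your registry host:port (e.g. 192.168.1.145:8082):

```shell
# Probe the Docker registry endpoint directly.
# A fast "Connection refused" means the host answered but nothing is
# listening on that port; a hang followed by a timeout means traffic
# never reached a listener at all (routing, load balancer, or firewall).
curl --silent --show-error --max-time 3 http://127.0.0.1:9/v2/ \
  || echo "curl failed with exit code $?"
```

Exit code 7 from curl means "refused", while exit code 28 means "timed out"; the same distinction shows up in the docker login error text.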


Thank you Matthew. The forum wasn’t letting me post the error messages.

docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

docker login -u admin -p admin123 192.168.1.181:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.181:8082/v2/": dial tcp 192.168.1.181:8082: connectex: No connection could be made because the target machine actively refused it.
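Worth noting about these two messages: both occur before TLS is even negotiated, so they point at connectivity rather than certificates. The timeout means nothing answered at that address; "actively refused" means the host is reachable but nothing is listening on the port. Separately, docker login assumes HTTPS for any non-localhost registry, so if a Nexus Docker connector serves plain HTTP, then even once the port answers, the Docker daemon must list the registry as insecure. A sketch of /etc/docker/daemon.json, assuming HTTP connectors on 8082 and 8083:

```json
{
  "insecure-registries": [
    "192.168.1.145:8082",
    "192.168.1.145:8083"
  ]
}
```

The Docker daemon must be restarted for this change to take effect.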

That seemed to be a default setting that I have now changed. Thanks for bringing that to my attention!

Thank you Maura. The repo-values.yaml is currently:

ingress:
  enabled: false
  path: /
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
  tls:
    enabled: true
    secretName: nexus-tls
nexus:
  nexusPort: 8081
  service:
    enabled: true
    type: LoadBalancer
    loadBalancerSourceRanges:
      - 192.168.1.145/32
    loadBalancerIP: 192.168.1.145
  env:
    - name: NEXUS_SECURITY_RANDOMPASSWORD
      value: "false"
  dockerSupport:
    enabled: true
  docker:
    enabled: true
    registries:
      - host: 192.168.1.145
        port: 8082
        secretName: registry-secret
      - host: 192.168.1.145
        port: 8083
        secretName: registry-secret
nexusProxy:
  env:
    nexusHttpHost: solar-nexus.office.multideck.com
persistence:
  storageSize: "200Gi"
service:
  name: nexus3
  enabled: true
  type: LoadBalancer
  annotations:
    metallb.universe.tf/allow-shared-ip: "{{ ndo_context }}"
  loadBalancerIP: 192.168.1.145

I suspect that a different hostname, other than solar-nexus, is being used for the docker-group and docker-private registry settings; that hostname is then assigned an IP via DHCP and used for the External Endpoint. Of course, there is no server listening at that address. My next step might be to start reading through the Nexus Repository Manager source code, but I was hoping that someone had already bumped their head on this, as there is a Helm chart and this was a recommended setup (per the blog).

I directly updated the yaml for the nexus-repo-nexus-repository-manager-docker-8082 service, adding:

loadBalancerIP: 192.168.1.145
loadBalancerSourceRanges:
  - 0.0.0.0/0

The External Endpoint now shows the correct IP address. The previous error message:

PS C:\Users\fbrier> docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

With this change becomes:

PS C:\Users\fbrier> docker login -u admin -p admin123 192.168.1.145:8082
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://192.168.1.145:8082/v2/": dial tcp 192.168.1.145:8082: connectex: No connection could be made because the target machine actively refused it.

I am guessing (not knowing Helm) that adding after line 51 in helm3-charts/charts/nexus-repository-manager/templates/service.yaml:

    {{- if $.Values.nexus.loadBalancerIP }}
    loadBalancerIP: {{ $.Values.nexus.loadBalancerIP }}
    loadBalancerSourceRanges:
      - 0.0.0.0/0
    {{- end }}

would generate the required service.yaml. Unfortunately, the above error is still preventing login. The generated service.yaml also has both a port and a nodePort attribute, which it probably should not:

spec:
  ports:
    - name: docker-8082
      protocol: TCP
      port: 8082
      targetPort: 8082
      nodePort: 32034

Please let me know if this is incorrect. I will try to delete nodePort and then port and see if it fixes it. Thank you.
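As an aside, the same change could be captured declaratively rather than edited by hand in the dashboard; a sketch, using a hypothetical patch file svc-patch.yaml applied with kubectl patch svc nexus-repo-nexus-repository-manager-docker-8082 --patch-file svc-patch.yaml:

```yaml
# svc-patch.yaml (hypothetical filename): reproduces the manual edit above,
# pinning the MetalLB IP and allowing traffic from any source range.
spec:
  loadBalancerIP: 192.168.1.145
  loadBalancerSourceRanges:
    - 0.0.0.0/0
```

A patch file survives a helm upgrade workflow better than a dashboard edit, since it can be re-applied after the chart regenerates the service.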

The error messages suggest that your load balancer isn’t listening on that port.

I suspect that the problem is in the generation of the service.yaml files for each Values.nexus.docker.registries entry. Port 8081 has the correct clusterIP (10.152.183.145), but examining the service.yaml in the Kubernetes Dashboard, 8082 and 8083 have different clusterIPs: 10.152.183.247 and 10.152.183.203 respectively.

I have a dnsutils pod set up, which I tty’ed into. From there, telnet to 10.152.183.145:8081 works, telnet to 10.152.183.247:8082 fails, and telnet to 10.152.183.145:8082 works. This would indicate that the generated service.yaml from the template is incorrect and that the load balancer is attempting to connect to the wrong IP. The helm3-charts/charts/nexus-repository-manager/templates/service.yaml template does not have lines for the clusterIP attributes:

  clusterIP: 10.152.183.247
  clusterIPs:
    - 10.152.183.247

They must be generated implicitly. I will try to learn how those attributes are generated. Kubernetes won’t let me change them once they are set, which prevents testing them before going to the trouble of modifying the template.

Hi Frederick,

Here are a few suggestions to help narrow down the problem. If you haven’t already done so, please list the ingresses and the services using kubectl get ingresses --all-namespaces and kubectl get services --all-namespaces.

Then describe each service and associated ingress using kubectl describe svc <service name> and kubectl describe ingress <ingress name> respectively. The aim is to check that each ingress and its associated service contain matching IPs and ports.

For example, for each ingress, the backend IP and port displayed by kubectl describe ingress <ingress name> should be the same as the IP and port shown for the corresponding service by kubectl describe svc <service name>.

Also, the endpoint IP and port displayed by each kubectl describe svc <service name> should be set.
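The matching check described above can be sketched as a small script. Here it runs against captured `kubectl describe svc` output so the extraction is reproducible; on the cluster you would pipe the real command in instead of the sample text:

```shell
# Pull out the lines from `kubectl describe svc` that must agree with
# the corresponding ingress backend: Port, TargetPort, and Endpoints.
# (Sample text captured from this thread; replace with a live pipe, e.g.
#  kubectl describe svc nexus-repo-nexus-repository-manager-docker-8082 | awk ...)
sample='Name:       nexus-repo-nexus-repository-manager-docker-8082
Port:       docker-8082  8082/TCP
TargetPort: 8082/TCP
Endpoints:  10.1.140.91:8082'

printf '%s\n' "$sample" | awk '/^(Port|TargetPort|Endpoints):/ { print }'
```

A non-empty Endpoints line is the key signal: it means the Service selector actually matches a ready pod.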

@Olu Shiyanbade
Below is the output of the commands you requested. Perhaps I am misunderstanding the Nexus Repository architecture, but my current understanding is that there is only one Nexus process. What the output seems to indicate is that while the exposed ingress ports are all on 192.168.1.145, the 8082 and 8083 ports are mapped to different (bogus) internal cluster IPs. 10.152.183.145 is the correct cluster IP for the Nexus process, and it is possible to connect to that cluster IP on ports 8081, 8082, and 8083. However, it is not possible to connect to 10.152.183.247:8082 or 10.152.183.203:8083. My guess is that the Helm chart is creating bogus host names and that the internal DHCP is assigning IP addresses to server processes that don’t exist. I experimented with changing the Nexus Helm chart templates and running the tests (which required setting up Docker Desktop and Minikube), but I was not successful. Any assistance would be appreciated. I apologize for missing your earlier response; I was coding another software component. Thank you.

fbrier@solar:~$ kubectl get ingresses --all-namespaces
NAMESPACE   NAME            CLASS    HOSTS                                ADDRESS         PORTS     AGE
default     tomcat-static   <none>   solar-tomcat.office.multideck.com    192.168.1.180   80, 443   358d
default     gitea           <none>   solar-gitea.office.multideck.com     192.168.1.180   80, 443   375d
default     mediawiki       <none>   solar-wiki.office.multideck.com      192.168.1.180   80, 443   382d
default     redmine         <none>   solar-redmine.office.multideck.com   192.168.1.180   80, 443   382d
fbrier@solar:~$ kubectl get services --all-namespaces
NAMESPACE     NAME                                              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
default       kubernetes                                        ClusterIP      10.152.183.1     <none>          443/TCP                      403d
default       ingress-nginx-controller-admission                ClusterIP      10.152.183.199   <none>          443/TCP                      392d
default       cert-manager-webhook                              ClusterIP      10.152.183.12    <none>          443/TCP                      392d
default       cert-manager                                      ClusterIP      10.152.183.179   <none>          9402/TCP                     392d
kube-system   metrics-server                                    ClusterIP      10.152.183.84    <none>          443/TCP                      392d
kube-system   dashboard-metrics-scraper                         ClusterIP      10.152.183.103   <none>          8000/TCP                     392d
default       ingress-nginx-controller                          LoadBalancer   10.152.183.214   192.168.1.180   80:32626/TCP,443:32128/TCP   392d
ingress       ingress                                           LoadBalancer   10.152.183.235   192.168.1.120   80:31313/TCP,443:32636/TCP   392d
openebs       openebs-apiservice                                ClusterIP      10.152.183.96    <none>          5656/TCP                     392d
openebs       admission-server-svc                              ClusterIP      10.152.183.24    <none>          443/TCP                      392d
kube-system   kubernetes-dashboard                              LoadBalancer   10.152.183.25    192.168.1.142   443:30987/TCP                392d
default       redmine-postgresql-headless                       ClusterIP      None             <none>          5432/TCP                     382d
default       redmine-postgresql                                ClusterIP      10.152.183.128   <none>          5432/TCP                     382d
default       redmine                                           LoadBalancer   10.152.183.74    192.168.1.141   80:31559/TCP                 382d
default       mediawiki-mariadb                                 ClusterIP      10.152.183.218   <none>          3306/TCP                     382d
default       mediawiki                                         LoadBalancer   10.152.183.78    192.168.1.143   80:30995/TCP                 382d
kube-system   kube-dns                                          ClusterIP      10.152.183.10    <none>          53/UDP,53/TCP,9153/TCP       375d
default       gitea-postgresql-headless                         ClusterIP      None             <none>          5432/TCP                     375d
default       gitea-postgresql                                  ClusterIP      10.152.183.126   <none>          5432/TCP                     375d
default       gitea-ssh                                         LoadBalancer   10.152.183.186   192.168.1.144   22:30267/TCP                 375d
default       gitea-memcached                                   ClusterIP      10.152.183.30    <none>          11211/TCP                    375d
default       gitea-http                                        LoadBalancer   10.152.183.72    192.168.1.144   8080:30130/TCP               375d
default       tomcat-static                                     LoadBalancer   10.152.183.202   192.168.1.147   80:31511/TCP                 358d
default       jenkins-agent                                     ClusterIP      10.152.183.134   <none>          50000/TCP                    375d
default       jenkins                                           LoadBalancer   10.152.183.112   192.168.1.146   8080:31347/TCP               375d
default       nexus-repo-nexus-repository-manager               LoadBalancer   10.152.183.145   192.168.1.145   8081:31018/TCP               376d
default       nexus-repo-nexus-repository-manager-docker-8082   LoadBalancer   10.152.183.247   192.168.1.145   8082:32034/TCP               24d
default       nexus-repo-nexus-repository-manager-docker-8083   LoadBalancer   10.152.183.203   192.168.1.145   8083:30461/TCP               24d
fbrier@solar:~$ kubectl describe svc nexus-repo-nexus-repository-manager
Name:                     nexus-repo-nexus-repository-manager
Namespace:                default
Labels:                   app.kubernetes.io/instance=nexus-repo
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=nexus-repository-manager
                          app.kubernetes.io/version=3.40.0
                          helm.sh/chart=nexus-repository-manager-40.0.0
Annotations:              meta.helm.sh/release-name: nexus-repo
                          meta.helm.sh/release-namespace: default
                          metallb.universe.tf/allow-shared-ip: {{ ndo_context }}
Selector:                 app.kubernetes.io/instance=nexus-repo,app.kubernetes.io/name=nexus-repository-manager
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.152.183.145
IPs:                      10.152.183.145
IP:                       192.168.1.145
LoadBalancer Ingress:     192.168.1.145
Port:                     nexus-ui  8081/TCP
TargetPort:               8081/TCP
NodePort:                 nexus-ui  31018/TCP
Endpoints:                10.1.140.91:8081
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age                     From             Message
  ----    ------        ----                    ----             -------
  Normal  nodeAssigned  6m12s (x4668 over 24d)  metallb-speaker  announcing from node "solar"
fbrier@solar:~$ kubectl describe svc nexus-repo-nexus-repository-manager-docker-8082
Name:                        nexus-repo-nexus-repository-manager-docker-8082
Namespace:                   default
Labels:                      app.kubernetes.io/instance=nexus-repo
                             app.kubernetes.io/managed-by=Helm
                             app.kubernetes.io/name=nexus-repository-manager
                             app.kubernetes.io/version=3.40.0
                             helm.sh/chart=nexus-repository-manager-40.0.0
Annotations:                 meta.helm.sh/release-name: nexus-repo
                             meta.helm.sh/release-namespace: default
                             metallb.universe.tf/allow-shared-ip: {{ ndo_context }}
Selector:                    app.kubernetes.io/instance=nexus-repo,app.kubernetes.io/name=nexus-repository-manager
Type:                        LoadBalancer
IP Family Policy:            SingleStack
IP Families:                 IPv4
IP:                          10.152.183.247
IPs:                         10.152.183.247
IP:                          192.168.1.145
LoadBalancer Ingress:        192.168.1.145
Port:                        docker-8082  8082/TCP
TargetPort:                  8082/TCP
NodePort:                    docker-8082  32034/TCP
Endpoints:                   10.1.140.91:8082
Session Affinity:            None
External Traffic Policy:     Cluster
LoadBalancer Source Ranges:  0.0.0.0/0
Events:
  Type    Reason        Age                     From             Message
  ----    ------        ----                    ----             -------
  Normal  nodeAssigned  2m18s (x4671 over 24d)  metallb-speaker  announcing from node "solar"
fbrier@solar:~$ kubectl describe svc nexus-repo-nexus-repository-manager-docker-8083
Name:                        nexus-repo-nexus-repository-manager-docker-8083
Namespace:                   default
Labels:                      app.kubernetes.io/instance=nexus-repo
                             app.kubernetes.io/managed-by=Helm
                             app.kubernetes.io/name=nexus-repository-manager
                             app.kubernetes.io/version=3.40.0
                             helm.sh/chart=nexus-repository-manager-40.0.0
Annotations:                 meta.helm.sh/release-name: nexus-repo
                             meta.helm.sh/release-namespace: default
                             metallb.universe.tf/allow-shared-ip: {{ ndo_context }}
Selector:                    app.kubernetes.io/instance=nexus-repo,app.kubernetes.io/name=nexus-repository-manager
Type:                        LoadBalancer
IP Family Policy:            SingleStack
IP Families:                 IPv4
IP:                          10.152.183.203
IPs:                         10.152.183.203
IP:                          192.168.1.145
LoadBalancer Ingress:        192.168.1.145
Port:                        docker-8083  8083/TCP
TargetPort:                  8083/TCP
NodePort:                    docker-8083  30461/TCP
Endpoints:                   10.1.140.91:8083
Session Affinity:            None
External Traffic Policy:     Cluster
LoadBalancer Source Ranges:  0.0.0.0/0
Events:
  Type    Reason        Age                     From             Message
  ----    ------        ----                    ----             -------
  Normal  nodeAssigned  2m50s (x4671 over 24d)  metallb-speaker  announcing from node "solar"
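One observation on the describe output above, offered as a hedge rather than a diagnosis: all three services report Endpoints on the same pod IP (10.1.140.91), just on different ports, and in Kubernetes every Service object is assigned its own distinct clusterIP, so the differing clusterIPs alone are expected rather than a sign of bogus hosts. A quick sanity check over the captured Endpoints values:

```shell
# Collapse the Endpoints addresses from the three `kubectl describe svc`
# outputs above to their pod IPs. A single distinct IP confirms that all
# three Services select the same Nexus pod.
endpoints='10.1.140.91:8081
10.1.140.91:8082
10.1.140.91:8083'

pod_ips=$(printf '%s\n' "$endpoints" | cut -d: -f1 | sort -u)
echo "distinct pod IPs: $pod_ips"
```

If the Service-to-pod wiring is correct, the remaining suspect is the path in front of it (kube-proxy rules for those clusterIPs, or MetalLB sharing 192.168.1.145 across services).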

@fnbrier Continuing your question from the other thread - pricing is dependent on the solution you’re looking to purchase. Let me know which you’re interested in and I can connect you with the right person. I will also see if I can get a follow up on your question above. Thank you!