Failed Docker login running Nexus Repository Manager on Kubernetes behind nginx-ingress

Hello,

I’m trying to log in to my private Docker repository. I’ve tried several approaches and get a different error each time, but I can’t get it to work. For example:

docker login --log-level=debug -u admin -p admin123 nexus.k3s.hybrid-services.ml
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
INFO[0000] /usr/bin/podman filtering at log level debug
DEBU[0000] Called login.PersistentPreRunE(/usr/bin/podman login --log-level=debug -u admin -p admin123 nexus.k3s.hybrid-services.ml)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jquesada/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jquesada/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1002/containers
DEBU[0000] Using static dir /home/jquesada/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1002/libpod/tmp
DEBU[0000] Using volume path /home/jquesada/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] No credentials matching nexus.k3s.hybrid-services.ml found in /run/user/1002/containers/auth.json
DEBU[0000] No credentials matching nexus.k3s.hybrid-services.ml found in /home/jquesada/.config/containers/auth.json
DEBU[0000] No credentials matching nexus.k3s.hybrid-services.ml found in /home/jquesada/.docker/config.json
DEBU[0000] No credentials matching nexus.k3s.hybrid-services.ml found in /home/jquesada/.dockercfg
DEBU[0000] No credentials for nexus.k3s.hybrid-services.ml found
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/nexus.k3s.hybrid-services.ml
DEBU[0000]  crt: /etc/docker/certs.d/nexus.k3s.hybrid-services.ml/nexus.crt
DEBU[0000] GET https://nexus.k3s.hybrid-services.ml/v2/
DEBU[0000] Ping https://nexus.k3s.hybrid-services.ml/v2/ status 404
DEBU[0000] GET https://nexus.k3s.hybrid-services.ml/v1/_ping
DEBU[0000] Ping https://nexus.k3s.hybrid-services.ml/v1/_ping status 404
Error: authenticating creds for "nexus.k3s.hybrid-services.ml": pinging container registry nexus.k3s.hybrid-services.ml: invalid status code from registry 404 (Not Found)
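From what I understand, the Docker client expects /v2/ to answer with 200 or 401, so the 404 on the ping suggests the request is landing on the Nexus UI (port 8081) rather than on a Docker connector. I’ve been checking the endpoint with a small helper like this (the hostname is just mine, replace as needed):

```shell
# Sketch: check whether an endpoint speaks the Docker v2 API.
# A working registry answers /v2/ with 200 (open) or 401 (auth required);
# a plain web UI typically returns 404, which is what I'm seeing.
check_v2() {
  code=$(curl -sk -o /dev/null -w '%{http_code}' "$1/v2/")
  case "$code" in
    200|401) echo "v2 API reachable ($code)" ;;
    *)       echo "not a v2 endpoint ($code)" ;;
  esac
}

# Usage against my host (hypothetical for anyone else):
# check_v2 https://nexus.k3s.hybrid-services.ml
```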

I’m working on Kubernetes with an nginx ingress controller. Both Nexus and nginx are installed on this cluster, which has 3 worker nodes, and nginx is currently acting as the load balancer.

I have no problem accessing Nexus directly; the problem appears when I try to log in to the Docker repository I have created inside Nexus.

I’m using HTTPS (“https://nexus.k3s.hybrid-services.ml/”) and I can access that URL as the admin user without issues.

Here is my configuration information:

Nexus ingress configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nexus-ingress-docker
  namespace: nexus-rm
  annotations:
    #kubernetes.io/ingress.class: nginx
    nginx.org/client-max-body-size: "1G"
    nginx.org/proxy-buffering: "off"
    nginx.org/server-snippets: |
      location ~ ^/(v1|v2)/[^/]+/?[^/]+/blobs/ {
        if ($request_method ~* (POST|PUT|DELETE|PATCH|HEAD) ) {
            rewrite ^/(.*)$ /repository/docker-private/$1 last;
        }
        rewrite ^/(.*)$ /repository/docker-public/$1 last;
      }

      location ~ ^/(v1|v2)/ {
        if ($request_method ~* (POST|PUT|DELETE|PATCH) ) {
            rewrite ^/(.*)$ /repository/docker-private/$1 last;
        }
        rewrite ^/(.*)$ /repository/docker-public/$1 last;
      }
    nginx.org/location-snippets: |
      proxy_set_header X-Forwarded-Proto https;
spec:
  tls:
  - hosts:
    - nexus.k3s.hybrid-services.ml
    secretName: nexus-lab-io-tls
  ingressClassName: nginx
  rules:
    - host: nexus.k3s.hybrid-services.ml
      http:
        paths:
          - backend:
              service:
                name: nexus-rm-nexus-repository-manager
                port:
                  number: 8081
            path: /
            pathType: Prefix
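One variant I’m considering, in case it’s relevant: instead of rewriting /v2/ requests into /repository/... paths with server snippets, route the Docker API path straight to a connector port, so /v2/ never hits the UI port at all. A minimal sketch, assuming a single Docker repository (or group) in Nexus with an HTTP connector on 18081 and the Docker Bearer Token Realm enabled — annotation names depend on which nginx ingress controller is installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nexus-ingress-docker
  namespace: nexus-rm
  annotations:
    nginx.org/client-max-body-size: "1G"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nexus.k3s.hybrid-services.ml
    secretName: nexus-lab-io-tls
  rules:
  - host: nexus.k3s.hybrid-services.ml
    http:
      paths:
      # Docker API traffic goes to the repository's HTTP connector.
      - path: /v2/
        pathType: Prefix
        backend:
          service:
            name: nexus-rm-nexus-repository-manager
            port:
              number: 18081
      # Everything else still goes to the Nexus UI.
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nexus-rm-nexus-repository-manager
            port:
              number: 8081
```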

Yaml file of Nexus service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: nexus-rm
    meta.helm.sh/release-namespace: nexus-rm
  creationTimestamp: "2023-01-17T13:45:50Z"
  labels:
    app.kubernetes.io/instance: nexus-rm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nexus-repository-manager
    app.kubernetes.io/version: 3.45.0
    helm.sh/chart: nexus-repository-manager-45.0.0
    velero.io/backup-name: backup-diario-3am-con-pv-20230117030015
    velero.io/restore-name: nexus-restore-20220117
  name: nexus-rm-nexus-repository-manager
  namespace: nexus-rm
  resourceVersion: "172380023"
  uid: 4e5b0187-378e-4554-a651-acc62c0340e7
spec:
  clusterIP: 10.43.127.242
  clusterIPs:
  - 10.43.127.242
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: nexus-ui
    port: 8081
    protocol: TCP
    targetPort: 8081
  - name: docker-private
    port: 18080
    protocol: TCP
    targetPort: 18080
  - name: docker-public
    port: 18081
    protocol: TCP
    targetPort: 18081
  selector:
    app.kubernetes.io/instance: nexus-rm
    app.kubernetes.io/name: nexus-repository-manager
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

I’ve tried everything I can think of, but I can’t make progress or find a solution.

Thanks in advance,

Jorge.

Hi @q.l.jorge, have you figured this out? I got it working fine on my side; let me know if I can help with that (I also had to spend some time making it work).