
Podman Network not being able to reach Host (Outbound Connectivity) #25093

Open
luckylinux opened this issue Jan 22, 2025 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. network Networking related issue or feature

Comments


luckylinux commented Jan 22, 2025

Issue Description

Over at least the last few days, I have experienced connectivity issues with my local Docker mirror, which essentially consists of the following containers:

  • docker.io/registry:latest
  • docker.io/cesanta/docker_auth:latest

In the traefik Reverse Proxy Logs I can see the following Message:

Error calling https://docker-auth.MYDOMAIN.TLD/auth. Cause: Get "https://docker-auth.MYDOMAIN.TLD/auth": dial tcp 192.168.8.15:443: connect: network is unreachable middlewareName=docker-local-mirror-registry-forwardauth@docker middlewareType=ForwardAuth

And indeed I CANNOT ping/curl/etc. the host IP (or the host's upstream gateway 192.168.1.1, for that matter) from within the traefik container, or from any of the registry or docker_auth containers connected to the traefik network.

See "Additional information" for the compose.yml Files.

Steps to reproduce the issue

Unsure since it occurred after a long period of working correctly 😞.

Describe the results you received

The HTTP page does NOT display at all (blank/"white" page) in Firefox.

traefik Logs show:

2025-01-22T18:15:00Z DBG github.com/traefik/traefik/v3/pkg/middlewares/auth/forward.go:152 > Error calling https://docker-auth.MYDOMAIN.TLD/auth. Cause: Get "https://docker-auth.MYDOMAIN.TLD/auth": dial tcp 192.168.8.15:443: connect: network is unreachable middlewareName=docker-local-mirror-registry-forwardauth@docker middlewareType=ForwardAuth
2025-01-22T18:15:00Z DBG github.com/traefik/traefik/v3/pkg/middlewares/auth/forward.go:152 > Error calling https://docker-auth.MYDOMAIN.TLD/auth. Cause: Get "https://docker-auth.MYDOMAIN.TLD/auth": dial tcp 192.168.8.15:443: connect: network is unreachable middlewareName=docker-local-mirror-registry-forwardauth@docker middlewareType=ForwardAuth

Trying to access the Service from within a Container running in the podman Network:

# docker-sync-registries is in the <podman> Network

podman@HOST:~$ podman exec -it docker-sync-registries /bin/bash
root@70ae6d6c3136:/opt/app# curl -L -v https://docker.MYDOMAIN.TLD/v2/_catalog/
* Host docker.MYDOMAIN.TLD:443 was resolved.
* IPv6: (none)
* IPv4: 192.168.8.15
*   Trying 192.168.8.15:443...
* GnuTLS ciphers: NORMAL:-ARCFOUR-128:-CTYPE-ALL:+CTYPE-X509:-VERS-SSL3.0
* ALPN: curl offers h2,http/1.1
* found 146 certificates in /etc/ssl/certs/ca-certificates.crt
* found 438 certificates in /etc/ssl/certs
* SSL connection using TLS1.3 / ECDHE_RSA_AES_128_GCM_SHA256
*   server certificate verification OK
*   server certificate status verification SKIPPED
*   common name: MYDOMAIN.TLD (matched)
*   server certificate expiration date OK
*   server certificate activation date OK
*   certificate public key: EC/ECDSA
*   certificate version: #3
*   subject: CN=MYDOMAIN.TLD
*   start date: Mon, 30 Dec 2024 09:04:08 GMT
*   expire date: Sun, 30 Mar 2025 09:04:07 GMT
*   issuer: C=US,O=Let's Encrypt,CN=E6
* ALPN: server accepted h2
* Connected to docker.MYDOMAIN.TLD (192.168.8.15) port 443
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://docker.MYDOMAIN.TLD/v2/_catalog/
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: docker.MYDOMAIN.TLD]
* [HTTP/2] [1] [:path: /v2/_catalog/]
* [HTTP/2] [1] [user-agent: curl/8.11.0]
* [HTTP/2] [1] [accept: */*]
> GET /v2/_catalog/ HTTP/2
> Host: docker.MYDOMAIN.TLD
> User-Agent: curl/8.11.0
> Accept: */*
> 
* Request completely sent off
< HTTP/2 500 
< content-length: 0
< date: Wed, 22 Jan 2025 18:20:30 GMT
< 
* Connection #0 to host docker.MYDOMAIN.TLD left intact

root@70ae6d6c3136:/opt/app# cat /etc/hosts
127.0.0.1	localhost localhost.localdomain localhost4 localhost4.localdomain4
::1	localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.15	localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.1.1	podmanserver15.MYDOMAIN.TLD podmanserver15
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
192.168.8.15	host.containers.internal host.docker.internal
10.0.2.100	70ae6d6c3136 docker-sync-registries

root@70ae6d6c3136:/opt/app# cat /etc/resolv.conf 
search MYDOMAIN.TLD
nameserver 10.0.2.3
nameserver 2XX:XXXX:XXXX:1::1:3
nameserver 2XX:XXXX:XXXX:1::1:4
nameserver 2XX:XXXX:XXXX:1::1:5
nameserver 192.168.1.3
nameserver 192.168.1.4
nameserver 192.168.1.5
nameserver 2XX:XXXX:XXXX:1::1:3
nameserver 2XX:XXXX:XXXX:1::1:4

root@70ae6d6c3136:/opt/app# ip route
default via 10.0.2.2 dev tap0 
10.0.2.0/24 dev tap0 proto kernel scope link src 10.0.2.100 

The same curl result can be obtained by executing the same command directly on the host.

I cannot even run apk add curl from either the traefik or the docker-local-mirror-registry container, so I'd say ALL containers on the traefik network are affected.

Container(s?) on the podman network seem to be fine with regard to outbound connectivity, and I can curl the docker mirror (getting an error because the converter doesn't reply).
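
The comparison above can be scripted; a minimal sketch (HOST_IP and the container names are taken from this report, and the podman invocations are left as comments because they only make sense on this host):

```shell
#!/bin/sh
# Sketch: compare host reachability from containers on each network.
# HOST_IP and container names are taken from this issue report.
HOST_IP=192.168.8.15

# check LABEL CMD [ARGS...] -- run CMD quietly and report ok/FAILED.
check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "$label: ok"
  else
    echo "$label: FAILED"
  fi
}

# From the traefik network (fails per this report):
#   check traefik-net podman exec traefik wget -q -T 3 -O /dev/null "https://$HOST_IP"
# From the podman network (works per this report):
#   check podman-net podman exec docker-sync-registries curl -m 3 -ksf -o /dev/null "https://$HOST_IP"

# Local demonstration of the helper itself:
check true-case  true    # -> true-case: ok
check false-case false   # -> false-case: FAILED
```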

Packet Capture OUTBOUND: docker-local-registry-mirror Container -> apk update Servers attached.

Packet_Capture_OUTBOUND_from_docker-local-registry-mirror_Container_to_apk_update_Servers.txt

Packet Capture Inbound: Desktop Firefox -> traefik Container attached.

Packet_Capture_INBOUND_from_Desktop_Firefox_to_traefik_Container.txt

Describe the results you expected

Traefik simply forwards the connection to the docker-auth container for validating credentials.

ping / curl / etc. of the 192.168.8.15 IP address (the HOST IP address) should have been successful.

podman info output

podman@HOST:~$ podman info
host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 99.06
    systemPercent: 0.49
    userPercent: 0.45
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: server
    version: "41"
  eventLogger: journald
  freeLocks: 2000
  hostname: HOST.MYDOMAIN.TLD
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 655360
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 655360
      size: 65536
  kernel: 6.9.5-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 5542244352
  memTotal: 8171139072
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/user/1002/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20241211.g09478d5-1.fc41.x86_64
    version: |
      pasta 0^20241211.g09478d5-1.fc41.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1002/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 10733215744
  swapTotal: 10733215744
  uptime: 11h 26m 6.00s (Approximately 0.46 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - docker.MYDOMAIN.TLD
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 5
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.13-2.fc41.x86_64
      Version: |-
        fusermount3 version: 3.16.2
        fuse-overlayfs: version 1.13-dev
        FUSE library version 3.16.2
        using FUSE kernel interface version 7.38
    overlay.mountopt: nodev
  graphRoot: /data/PODMAN/STORAGE
  graphRootAllocated: 539448795136
  graphRootUsed: 9659068416
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /home/podman/containers/tmp
  imageStore:
    number: 188
  runRoot: /run/user/1002
  transientStore: false
  volumePath: /data/PODMAN/VOLUMES
version:
  APIVersion: 5.3.1
  Built: 1732147200
  BuiltTime: Thu Nov 21 01:00:00 2024
  GitCommit: ""
  GoVersion: go1.23.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Podman Networks on this Host

podman@HOST:~$ podman network ls
NETWORK ID    NAME        DRIVER
2f259bab93aa  podman      bridge
a6d68cbc095b  traefik     bridge

Details of podman Network:

podman@podmanserver15:~$ podman network inspect podman
[
     {
          "name": "podman",
          "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
          "driver": "bridge",
          "network_interface": "podman0",
          "created": "2025-01-22T19:12:40.137714633+01:00",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          },
          "containers": {}
     }
]

Details of traefik Network:

[
     {
          "name": "traefik",
          "id": "a6d68cbc095bcc6234b02ad5915d29427576e0b414267cc14a184f6a1c93dbf1",
          "driver": "bridge",
          "network_interface": "podman1",
          "created": "2024-08-02T20:41:39.670582333+02:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          },
          "containers": {
               "33d38cf2f1b16edca275fafc90f1b96fd1e3171e0569079966e8ac2806c9fee6": {
                    "name": "docker-local-mirror-registry",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.9/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "e6:7d:af:f2:ee:f2"
                         }
                    }
               },
               "4d4f4fc79548eb3893320bc6dd578b2a9bbb5e173a0c05909196e68560436033": {
                    "name": "traefik",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.11/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "ee:18:ed:56:c6:a9"
                         }
                    }
               },
               "f24cb1f788638be820a7eea59d8415a474c4ede704eac9892d9301a4834f509f": {
                    "name": "docker-local-mirror-auth",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.8/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "ce:d0:a0:c1:b8:10"
                         }
                    }
               }
          }
     }
]

Additional information

Local Docker Mirror compose.yml File:

version: "3.8"

services:
  docker-local-mirror-registry:
    image: docker.io/registry:latest
    pull_policy: "missing"
    container_name: docker-local-mirror-registry
    restart: "unless-stopped"
    volumes:
      - ~/containers/data/docker-local-mirror-registry:/var/lib/registry:rw,z
      - ~/containers/certificates/docker-local-mirror-auth/cert.pem:/cert/auth/cert.pem:ro,z
      - ~/containers/config/docker-local-mirror-registry/registry:/etc/docker/registry:ro,z
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.docker-local-mirror-registry-router.rule=Host(`docker.MYDOMAIN.TLD`) || Host(`docker-local.MYDOMAIN.TLD`) || Host(`docker-local-mirror.MYDOMAIN.TLD`) || Host(`docker-local-mirror-registry.MYDOMAIN.TLD`) || Host(`docker-images.MYDOMAIN.TLD`) || Host(`docker-mirror.MYDOMAIN.TLD`)"

      # Headers Middleware
      - "traefik.http.routers.docker-local-mirror-registry-router.middlewares=docker-local-mirror-registry-headers,docker-local-mirror-registry-forwardauth"
      - "traefik.http.middlewares.docker-local-mirror-registry-headers.headers.customrequestheaders.Connection=Upgrade"

      - "traefik.http.middlewares.docker-local-mirror-registry-forwardauth.forwardauth.address=https://docker-auth.MYDOMAIN.TLD/auth"
      - "traefik.http.middlewares.docker-local-mirror-registry-forwardauth.forwardauth.trustforwardheader=true"
      - "traefik.http.middlewares.docker-local-mirror-registry-forwardauth.forwardauth.authresponseheaders=X-Forwarded-User"

      # Setup Service
      - "traefik.http.services.docker-local-mirror-registry-service.loadbalancer.server.port=5000"
      - "traefik.docker.network=traefik"
    environment:
      # Direct Connection
      - "REGISTRY_HTTP_ADDR=0.0.0.0:5000"
      # Use Traefik SSL Connection on Port 443
      #- "REGISTRY_HTTP_ADDR=0.0.0.0:443"
      - "REGISTRY_LOG_LEVEL=error"
      - "REGISTRY_STORAGE_DELETE_ENABLED=false"
      - "REGISTRY_STORAGE_DELETE_AGE=1344"
      - "REGISTRY_HTTP_SECRET=JSiG8jFsybtYwQUidcQHZghxFWd7zZ4CiKJRuVCy4AxgSszdqqE5qLoBnSYv3VpA"

  docker-local-mirror-auth:
    image: docker.io/cesanta/docker_auth:latest
    pull_policy: "missing"
    container_name: docker-local-mirror-auth
    volumes:
      - ~/containers/log/docker-local-mirror-auth:/logs:rw,z
      - ~/containers/config/docker-local-mirror-auth:/config:ro,z
      - ~/containers/certificates/docker-local-mirror-auth:/cert/auth:ro,z
    restart: "unless-stopped"
    command: --v=2 --alsologtostderr /config/config.yml
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.docker-local-mirror-auth-router.rule=Host(`docker-auth.MYDOMAIN.TLD`)"

      # Headers Middleware
      - "traefik.http.routers.docker-local-mirror-auth-router.middlewares=docker-local-mirror-auth-headers"
      - "traefik.http.middlewares.docker-local-mirror-auth-headers.headers.customrequestheaders.Connection=Upgrade"

      # Setup Service
      - "traefik.http.services.docker-local-mirror-auth-service.loadbalancer.server.port=5001"

# Container Networks
networks:
  traefik:
    external: true

Entrypoint:

  • docker.io/traefik:v3.2

Its compose.yml File:

version: '3.9'

services:
  traefik:
    image: traefik:v3.2
    pull_policy: "missing"
    security_opt:
      - no-new-privileges:true
      - label=type:container_runtime_t
    restart: unless-stopped
    container_name: traefik
    ports:
      - target: 80
        host_ip: 192.168.8.15
        published: 80
        protocol: tcp
      - target: 443
        host_ip: 192.168.8.15
        published: 443
        protocol: tcp
      - target: 443
        host_ip: 192.168.8.15
        published: 443
        protocol: udp
    networks:
      - traefik
    volumes:
      - /run/user/1002/podman/podman.sock:/var/run/docker.sock:ro,z
      - ~/containers/config/traefik/dynamic:/etc/traefik/dynamic:ro,z
      - ~/containers/certificates/letsencrypt/MYDOMAIN.TLD:/certificates/MYDOMAIN.TLD:ro,z
      - ~/containers/log/traefik:/log:rw,z
    command:
      ## Logging
      # Server Log
      - "--log.level=DEBUG"
      - "--log.filePath=/log/server/traefik.log"

      # Access Log
      - "--accesslog=true"
      - "--accesslog.filePath=/log/access/access.log"

      ## Dashboard & API
      - "--api"
      - "--api.insecure=false" # production = false , development = true
      - "--api.dashboard=true"

      ## EntryPoints
      # Unsecure Connection - Redirect to Secure
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entryPoints.web.http.redirections.entrypoint.scheme=https"
      - "--entrypoints.web.http.redirections.entrypoint.permanent=true"

      # Secure Connection
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.websecure.http.tls=true"

      # Traefik v2
      - "--entryPoints.websecure.transport.respondingTimeouts.readTimeout=420"
      - "--entryPoints.websecure.transport.respondingTimeouts.writeTimeout=420"
      - "--entryPoints.websecure.transport.respondingTimeouts.idleTimeout=420"

      ## Docker / Podman Integration
      - "--providers.docker=true"
      - "--providers.docker.exposedByDefault=false"
      - "--providers.docker.watch=true"
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"

      # Use Dynamic Configuration
      - "--providers.file=true"
      - "--providers.file.directory=/etc/traefik/dynamic"

      ## Other
      # ...
      - "--serversTransport.insecureSkipVerify=true"

      # No Telemetry
      - "--global.sendAnonymousUsage=false"

    labels:
      # Enable Traefik
      - "traefik.enable=true"

      # Dashboard
      - "traefik.http.routers.dashboard.rule=Host(`HOST.MYDOMAIN.TLD`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))"
      - "traefik.http.routers.dashboard.service=api@internal"

# Container Networks
networks:
  traefik:
    external: true

luckylinux added the kind/bug label Jan 22, 2025

luckylinux commented Jan 22, 2025

Outbound Connectivity

Host View (tcpdump -s0 -w capture_host_docker-local-mirror-registry_outbound_traffic.pcap):

(screenshots attached)

Container View (podman unshare nsenter -n$(podman inspect --format '{{.NetworkSettings.SandboxKey}}' docker-local-mirror-registry) tcpdump -s0 -w capture_docker-local-mirror-registry_outbound_traffic.pcap):

(screenshot attached)

Inbound Connectivity

Host View (tcpdump -s0 -w capture_host_traefik_inbound_traffic.pcap):

(screenshots attached)

Container View (podman unshare nsenter -n$(podman inspect --format '{{.NetworkSettings.SandboxKey}}' traefik) tcpdump -s0 -w capture_traefik_inbound_traffic.pcap):

(screenshot attached)

Hypothesis ?

I'd say routing is completely broken ...

luckylinux (Author) commented:

Outbound Connectivity in PCAP Format Filtered (TCP 443)

Host View (tcpdump -s0 -w capture_host_docker-local-mirror-registry_outbound_traffic.pcap):

capture_host_docker-local-mirror-registry_outbound_traffic.zip

Container View (podman unshare nsenter -n$(podman inspect --format '{{.NetworkSettings.SandboxKey}}' docker-local-mirror-registry) tcpdump -s0 -w capture_docker-local-mirror-registry_outbound_traffic.pcap):

capture_docker-local-mirror-registry_outbound_traffic_port_443.zip

Inbound Connectivity in PCAP Format Filtered (TCP 443)

Host View (tcpdump -s0 -w capture_host_traefik_inbound_traffic.pcap):

capture_host_traefik_inbound_traffic.zip

Container View (podman unshare nsenter -n$(podman inspect --format '{{.NetworkSettings.SandboxKey}}' traefik) tcpdump -s0 -w capture_traefik_inbound_traffic.pcap):

capture_traefik_inbound_traffic_port_443.zip

sbrivio-rh added the network label Jan 22, 2025
sbrivio-rh (Collaborator) commented:

Now you can upgrade to passt-0^20250121.g4f2c8e7-2.fc41 by the way, but I don't think it's going to fix your issue.


Luap99 commented Jan 23, 2025

Did you try to reboot? Maybe the rootless netns is somehow in a broken/weird state?
It is possible that the rootless netns pasta process died or that the namespace was removed...
Then a newly started container should recreate the namespace, but all old containers will still be attached to the old broken namespace. A container restart might fix this, but a reboot would be better to ensure a clean state.

podman unshare --rootless-netns ip a and podman unshare --rootless-netns ip route could help to show how the routing looks there. But these commands will create a new rootless-netns namespace if the old one was removed, and they might restart pasta.

So if you continue to have problems, check whether the pasta process is even running and whether there are any relevant logs in the journal.
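
A small sketch of that check (the pidfile path is an assumption derived from the pasta command line quoted elsewhere in this thread, for UID 1002; the journal command is left as a comment):

```shell
#!/bin/sh
# Sketch: check whether the rootless-netns pasta process died, which would
# strand already-running containers in a dead namespace. The pidfile path
# is an assumption based on the pasta command line in this report (UID 1002).

# pid_alive PIDFILE -- succeed if PIDFILE names a currently live process.
pid_alive() {
  [ -r "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

pidfile=/run/user/1002/networks/rootless-netns/rootless-netns-conn.pid
if pid_alive "$pidfile"; then
  echo "rootless-netns pasta is alive (pid $(cat "$pidfile"))"
else
  echo "rootless-netns pasta is missing, or the pidfile is stale/absent"
fi

# Relevant journal entries, if any:
#   journalctl --user -b | grep -iE 'pasta|netavark|aardvark' | tail -n 20
```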

root@70ae6d6c3136:/opt/app# ip route
default via 10.0.2.2 dev tap0
10.0.2.0/24 dev tap0 proto kernel scope link src 10.0.2.100

That does not look like the routing setup from a container on the podman network. This looks like the default slirp4netns setup (or possibly pasta, if you used custom options to make it use the slirp4netns addresses).
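
The slirp4netns fingerprint here is the 10.0.2.0/24 range (gateway 10.0.2.2, container address 10.0.2.100, DNS at 10.0.2.3) on a tap device, which matches the `ip route` output quoted above. A tiny illustrative sketch of that distinction (the classifier function is hypothetical, not part of any tool):

```shell
#!/bin/sh
# Sketch: recognize slirp4netns-style routing from a container's default
# route line. slirp4netns uses 10.0.2.2 as the gateway on a tap device
# by default.

# classify_route "ROUTE_LINE" -- print which setup the route resembles.
classify_route() {
  case "$1" in
    *"via 10.0.2.2 "*tap*) echo "slirp4netns-style (10.0.2.0/24 via tap)" ;;
    *)                     echo "other (e.g. bridge or pasta copying host addresses)" ;;
  esac
}

classify_route "default via 10.0.2.2 dev tap0"   # -> slirp4netns-style (10.0.2.0/24 via tap)
classify_route "default via 10.89.0.1 dev eth0"  # -> other (e.g. bridge or pasta copying host addresses)

# To confirm which backend podman is configured to use:
#   podman info --format '{{.Host.RootlessNetworkCmd}}'
```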

luckylinux (Author) commented:

@Luap99: As I said, a reboot previously worked (a few days ago I could "solve" it like this).

But this last time (yesterday, basically) I rebooted about 5 times and nothing changed, unfortunately 😞

luckylinux (Author) commented:

Container Addresses

podman@HOST:~$ podman unshare --rootless-netns ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo 
       valid_lft forever preferred_lft forever
2: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 06:ac:5a:bc:32:4e brd ff:ff:ff:ff:ff:ff
    inet 172.30.1.2/24 brd 172.30.1.255 scope global noprefixroute ens19
       valid_lft forever preferred_lft forever
    inet6 2XXX:XXXX:XXXX:1::8:15/64 scope global nodad noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 2XXX:XXXX:XXXX:1:c0be:cfe8:f3ca:aaee/64 scope global nodad mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::4ac:5aff:febc:324e/64 scope link nodad proto kernel_ll 
       valid_lft forever preferred_lft forever
3: podman1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c6:58:40:53:2e:a0 brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.1/24 brd 10.89.0.255 scope global podman1
       valid_lft forever preferred_lft forever
    inet6 fe80::c458:40ff:fe53:2ea0/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
10: veth2@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman1 state UP group default qlen 1000
    link/ether 72:d9:a1:e1:12:aa brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::70d9:a1ff:fee1:12aa/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
11: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman1 state UP group default qlen 1000
    link/ether da:06:0d:41:b4:89 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::d806:dff:fe41:b489/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
14: veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman1 state UP group default qlen 1000
    link/ether d2:a9:a4:a4:d7:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::d0a9:a4ff:fea4:d7b5/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever

Container Routes

podman@HOST:~$ podman unshare --rootless-netns ip route
10.89.0.0/24 dev podman1 proto kernel scope link src 10.89.0.1 
172.30.1.0/24 dev ens19 proto kernel scope link metric 101 

Podman Networks

podman@HOST:~$ podman network ls
NETWORK ID    NAME        DRIVER
2f259bab93aa  podman      bridge
a6d68cbc095b  traefik     bridge
podman@HOST:~$ podman network inspect podman
[
     {
          "name": "podman",
          "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
          "driver": "bridge",
          "network_interface": "podman0",
          "created": "2025-01-23T18:02:56.954719818+01:00",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          },
          "containers": {}
     }
]
podman@HOST:~$ podman network inspect traefik
[
     {
          "name": "traefik",
          "id": "a6d68cbc095bcc6234b02ad5915d29427576e0b414267cc14a184f6a1c93dbf1",
          "driver": "bridge",
          "network_interface": "podman1",
          "created": "2024-08-02T20:41:39.670582333+02:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          },
          "containers": {
               "33d38cf2f1b16edca275fafc90f1b96fd1e3171e0569079966e8ac2806c9fee6": {
                    "name": "docker-local-mirror-registry",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.9/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "e6:7d:af:f2:ee:f2"
                         }
                    }
               },
               "5f4aa4b7557ef2488da906b1ad8f4ed5fb466f89b256749250c3ae4269c2dc0e": {
                    "name": "traefik",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.12/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "0e:d3:fb:12:74:7b"
                         }
                    }
               },
               "f24cb1f788638be820a7eea59d8415a474c4ede704eac9892d9301a4834f509f": {
                    "name": "docker-local-mirror-auth",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.8/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "ce:d0:a0:c1:b8:10"
                         }
                    }
               }
          }
     }
]

Overall Host Addresses

podman@HOST:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:a3:3f:4c brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 192.168.8.15/20 brd 192.168.15.255 scope global noprefixroute ens18
       valid_lft forever preferred_lft forever
    inet6 2XXX:XXXX:XXXX:1:c0be:cfe8:f3ca:aaee/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86357sec preferred_lft 14357sec
    inet6 2XXX:XXXX:XXXX:1::8:15/64 scope global dynamic noprefixroute 
       valid_lft 2894sec preferred_lft 1544sec
    inet6 fe80::e137:2aca:490c:29b5/64 scope link 
       valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:de:11:5e brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet 172.30.1.2/24 brd 172.30.1.255 scope global noprefixroute ens19
       valid_lft forever preferred_lft forever
    inet6 fe80::c345:a190:d855:603b/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Pasta Processes

podman@HOST:~$ ps aux | grep pasta
podman      1698  0.0  0.2  68500 16908 ?        Ss   Jan22   0:06 /usr/bin/pasta --config-net --pid /run/user/1002/networks/rootless-netns/rootless-netns-conn.pid --dns-forward 169.254.1.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1002/networks/rootless-netns/rootless-netns --map-guest-addr 169.254.1.2
podman    837018  0.0  0.0   6500  2168 pts/0    S+   18:03   0:00 grep --color=auto pasta

The only weird thing is that 172.30.1.2 is the IP address of a connection to an NFS server where I store the local Docker mirror images.

I do NOT know why it shows up together with the container addresses / routes.

luckylinux (Author) commented:

Weird ... after upgrading to traefik:v3.3 (currently 3.3.2) it seems to work, for now:

podman@HOST:~$ podman exec -it traefik /bin/sh
/ # apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/community/x86_64/APKINDEX.tar.gz
v3.21.2-108-g0cb3267fd83 [https://dl-cdn.alpinelinux.org/alpine/v3.21/main]
v3.21.2-118-ga435ec4294e [https://dl-cdn.alpinelinux.org/alpine/v3.21/community]
OK: 25395 distinct packages available
/ # exit


podman@HOST:~$ podman exec -it docker-local-mirror-registry /bin/sh
/ # apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/community/x86_64/APKINDEX.tar.gz
v3.18.11-22-ga848e812d4a [https://dl-cdn.alpinelinux.org/alpine/v3.18/main]
v3.18.11-22-ga848e812d4a [https://dl-cdn.alpinelinux.org/alpine/v3.18/community]
OK: 20070 distinct packages available
/ # exit

@Luap99
Member

Luap99 commented Jan 23, 2025

How do the routes look on the host? One thing to note is that pasta picks the host interface with a default route by default, but if there are no default routes, or multiple ones, then the outcome might not be strictly deterministic.

Based on your ip addr output we can see that pasta picked ens19 with the 172.30.1.0/24 address; however, it also picked the IPv6 addresses from the ens18 interface, I presume. That alone is a bit unusual but should work like any other setup, so I don't think that is the reason, maybe a contributing factor.

When you did the address/route dump, was it in a state where it didn't work? Because that all looks totally valid; the routes seem to be there, so a "network is unreachable" error doesn't make a lot of sense to me.

We did have some netavark issues with nftables (new as of Fedora 41): we leaked port-forwarding rules (and still do in some cases, containers/netavark#1160), but that should not matter for outgoing traffic. You can check podman unshare --rootless-netns nft list ruleset to see if something bad got in there.

@luckylinux
Author

The default route on the host is via 192.168.1.1 and ens18, so if what you say is true, then pasta / podman selected the wrong route, since it apparently decided to use ens19 for whatever reason.

podman@HOST:~$ ip route
default via 192.168.1.1 dev ens18 src 192.168.8.15 metric 1002 
172.30.1.0/24 dev ens19 proto dhcp scope link src 172.30.1.2 metric 1003 
192.168.0.0/20 dev ens18 proto dhcp scope link src 192.168.8.15 metric 1002
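For illustration, the "pick the interface that holds a default route" heuristic discussed above can be sketched against this routing table. This is a hypothetical reconstruction in Python for clarity only; pasta's real selection logic lives in the passt C code, and the route data below is just the host's `ip route` output reshaped as `ip -j route`-style JSON:

```python
import json

# Routes from this host, reshaped as `ip -j route`-style JSON (sample data).
routes = json.loads("""
[
  {"dst": "default", "gateway": "192.168.1.1", "dev": "ens18", "metric": 1002},
  {"dst": "172.30.1.0/24", "dev": "ens19", "protocol": "dhcp", "metric": 1003},
  {"dst": "192.168.0.0/20", "dev": "ens18", "protocol": "dhcp", "metric": 1002}
]
""")

def pick_template_interface(routes):
    """Return the device of the lowest-metric default route, or None.

    With no default route (or several equal ones) the choice becomes
    ambiguous, which matches the non-deterministic behavior described above.
    """
    defaults = [r for r in routes if r.get("dst") == "default"]
    if not defaults:
        return None
    return min(defaults, key=lambda r: r.get("metric", 0))["dev"]

print(pick_template_interface(routes))  # → ens18
```

By this reading, ens18 should have been selected for IPv4 as well, which is why ens19 showing up as pasta's template interface is surprising.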

I ran the commands in my previous message this evening.

This morning it was NOT working (unreachable).

No clue whether it was working BEFORE I ran these latest commands. I mean ... why would it just start working when it was giving issues from at least yesterday morning until this morning?

Concerning nft, I am really NOT familiar with it (I lost a couple of days trying to get NAT & masquerade working on an ARM SBC, then gave up ...).

This is the output:

table inet netavark {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		ip saddr 10.89.0.0/24 meta l4proto { tcp, udp } th dport 53 accept
	}

	chain FORWARD {
		type filter hook forward priority filter; policy accept;
		ct state invalid drop
		jump NETAVARK-ISOLATION-1
		ip daddr 10.89.0.0/24 ct state established,related accept
		ip saddr 10.89.0.0/24 accept
	}

	chain POSTROUTING {
		type nat hook postrouting priority srcnat; policy accept;
		meta mark & 0x00002000 == 0x00002000 masquerade
		ip saddr 10.89.0.0/24 jump nv_a6d68cbc_10_89_0_0_nm24
	}

	chain PREROUTING {
		type nat hook prerouting priority dstnat; policy accept;
		fib daddr type local jump NETAVARK-HOSTPORT-DNAT
	}

	chain OUTPUT {
		type nat hook output priority dstnat; policy accept;
		fib daddr type local jump NETAVARK-HOSTPORT-DNAT
	}

	chain NETAVARK-HOSTPORT-DNAT {
		ip daddr 192.168.8.15 udp dport 443 jump nv_a6d68cbc_10_89_0_0_nm24_dnat
		ip daddr 192.168.8.15 tcp dport 80 jump nv_a6d68cbc_10_89_0_0_nm24_dnat
		ip daddr 192.168.8.15 tcp dport 443 jump nv_a6d68cbc_10_89_0_0_nm24_dnat
	}

	chain NETAVARK-HOSTPORT-SETMARK {
		meta mark set meta mark | 0x00002000
	}

	chain NETAVARK-ISOLATION-1 {
	}

	chain NETAVARK-ISOLATION-2 {
	}

	chain NETAVARK-ISOLATION-3 {
		oifname "podman1" drop
		jump NETAVARK-ISOLATION-2
	}

	chain nv_a6d68cbc_10_89_0_0_nm24 {
		ip daddr 10.89.0.0/24 accept
		ip daddr != 224.0.0.0/4 masquerade
	}

	chain nv_a6d68cbc_10_89_0_0_nm24_dnat {
		ip saddr 10.89.0.0/24 ip daddr 192.168.8.15 udp dport 443 jump NETAVARK-HOSTPORT-SETMARK
		ip saddr 127.0.0.1 ip daddr 192.168.8.15 udp dport 443 jump NETAVARK-HOSTPORT-SETMARK
		ip daddr 192.168.8.15 udp dport 443 dnat ip to 10.89.0.11:443
		ip saddr 10.89.0.0/24 ip daddr 192.168.8.15 tcp dport 80 jump NETAVARK-HOSTPORT-SETMARK
		ip saddr 127.0.0.1 ip daddr 192.168.8.15 tcp dport 80 jump NETAVARK-HOSTPORT-SETMARK
		ip daddr 192.168.8.15 tcp dport 80 dnat ip to 10.89.0.11:80
		ip saddr 10.89.0.0/24 ip daddr 192.168.8.15 tcp dport 443 jump NETAVARK-HOSTPORT-SETMARK
		ip saddr 127.0.0.1 ip daddr 192.168.8.15 tcp dport 443 jump NETAVARK-HOSTPORT-SETMARK
		ip daddr 192.168.8.15 tcp dport 443 dnat ip to 10.89.0.11:443
	}
}
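Reading the ruleset above: a container packet sent to 192.168.8.15:443 is marked in NETAVARK-HOSTPORT-SETMARK, DNATed to 10.89.0.11:443, and the mark later triggers masquerade in POSTROUTING; this is the standard hairpin-NAT pattern, so the published-port rules themselves look sane. A rough sketch of that decision flow (a simplified illustration only, ignoring the TCP/UDP distinction and nft evaluation order; the names below mirror the ruleset, nothing here is real netavark code):

```python
# Simplified model of the netavark hairpin-NAT rules shown above.
SUBNET = "10.89.0."            # 10.89.0.0/24, as in the ruleset
HOST_IP = "192.168.8.15"
BACKEND_IP = "10.89.0.11"      # DNAT target from the nm24_dnat chain

def evaluate(saddr: str, daddr: str, dport: int):
    """Return (new_daddr, new_dport, masquerade) for a packet."""
    if daddr != HOST_IP or dport not in (80, 443):
        return daddr, dport, False            # no DNAT rule matches
    # NETAVARK-HOSTPORT-SETMARK fires for container or loopback sources;
    # the mark later enables masquerade in POSTROUTING (hairpin case).
    hairpin = saddr.startswith(SUBNET) or saddr == "127.0.0.1"
    return BACKEND_IP, dport, hairpin

print(evaluate("10.89.0.5", HOST_IP, 443))  # → ('10.89.0.11', 443, True)
```

Note that these chains only handle traffic addressed to the published host ports; plain outgoing traffic from the containers never hits the DNAT path, which supports Luap99's point that leaked port-forwarding rules should not explain a "network is unreachable" error on outbound connections.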
