
Pod containers are instantly terminated without errors #25009

Open
Xinayder opened this issue Jan 13, 2025 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


Xinayder commented Jan 13, 2025

Issue Description

I am trying to set up Authentik with rootless Podman. I converted the docker-compose.yml file provided in their installation guide to quadlet using podlet compose --pod docker-compose.yml and tweaked the resulting files to my needs.

However, whenever I try to start a container related to this pod, I get the following error message:
Job for <container>.service canceled.
The containers try to start up, then they are suddenly terminated with no error at all.

I checked the systemd logs, but there is not much information about why the containers are terminated on startup. Some services do have interesting messages, though, such as "Failed to open cgroups file" and "no valid executable found for OCI runtime runsc: invalid argument".
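For reference, the status and journal output below can be gathered with commands along these lines (a sketch, assuming the units run under the user manager as shown in the logs; unit names are the ones from this setup):

```shell
# Status of the quadlet-generated services
systemctl --user status authentik-pod.service redis.service postgres.service

# Full journal for one unit since the last boot, including podman debug output
journalctl --user -u redis.service -b --no-pager
```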

○ authentik-pod.service
     Loaded: loaded (/home/yggdrasil/.config/containers/systemd/authentik.pod; generated)
     Active: inactive (dead) since Mon 2025-01-13 23:11:01 CET; 6min ago
   Duration: 293ms
 Invocation: 3dfdacee1dda40d9a7d875323780d621
    Process: 35533 ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile=/run/user/1001/authentik-pod.pid --pod-id-file=/run/user/1001/authentik-pod.pod-id --exit-policy=stop --replace --publish 9000:9000 --publish 9443:9443 --network pasta:-I,tap0,--ipv4-only,-a,10.0.42.20,-n,24,-g,10.0.42.2,--dns-forward,10.0.42.3,--no-ndp,--no-dhcpv6,--no-dhcp --infra-name systemd-authentik-infra --name systemd-authentik (code=exited, status=0/SUCCESS)
    Process: 35543 ExecStart=/usr/bin/podman pod start --pod-id-file=/run/user/1001/authentik-pod.pod-id (code=exited, status=0/SUCCESS)
    Process: 35622 ExecStop=/usr/bin/podman pod stop --pod-id-file=/run/user/1001/authentik-pod.pod-id --ignore --time=10 (code=exited, status=0/SUCCESS)
    Process: 35669 ExecStopPost=/usr/bin/podman pod rm --pod-id-file=/run/user/1001/authentik-pod.pod-id --ignore --force (code=exited, status=0/SUCCESS)
   Main PID: 35565 (code=exited, status=0/SUCCESS)
        CPU: 608ms

Jan 13 23:11:01 jotunheim podman[35543]: 2025-01-13 23:11:01.00643427 +0100 CET m=+0.119653129 container start 0422d3a7ae95cd8d8c5152c5c22fae1cac2ad5ddec5a6b749ab19c59dec902af (image=localhost/podman-pause:5.3.1-1733485830, name=systemd-authentik-infra, pod_id=3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6, PODMAN_SYSTEMD_UNIT=authentik-pod.service, io.buildah.version=1.38.0)
Jan 13 23:11:01 jotunheim podman[35543]: 2025-01-13 23:11:01.011223325 +0100 CET m=+0.124442184 pod start 3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6 (image=, name=systemd-authentik)
Jan 13 23:11:01 jotunheim authentik-pod[35543]: systemd-authentik
Jan 13 23:11:01 jotunheim systemd[1700]: Started authentik-pod.service.
Jan 13 23:11:01 jotunheim conmon[35565]: conmon 0422d3a7ae95cd8d8c51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-1001.slice/user@1001.service/user.slice/user-libpod_pod_3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6.slice/libpod-0422d3a7ae95cd8d8c5152c5c22fae1cac2ad5ddec5a6b749ab19c59dec902af.scope/container/memory.events
Jan 13 23:11:01 jotunheim podman[35622]: 2025-01-13 23:11:01.394098463 +0100 CET m=+0.061574585 pod stop 3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6 (image=, name=systemd-authentik)
Jan 13 23:11:01 jotunheim authentik-pod[35622]: systemd-authentik
Jan 13 23:11:01 jotunheim podman[35669]: 2025-01-13 23:11:01.574804369 +0100 CET m=+0.120784822 container remove 0422d3a7ae95cd8d8c5152c5c22fae1cac2ad5ddec5a6b749ab19c59dec902af (image=localhost/podman-pause:5.3.1-1733485830, name=systemd-authentik-infra, pod_id=3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6, io.buildah.version=1.38.0, PODMAN_SYSTEMD_UNIT=authentik-pod.service)
Jan 13 23:11:01 jotunheim podman[35669]: 2025-01-13 23:11:01.583902393 +0100 CET m=+0.129882886 pod remove 3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6 (image=, name=systemd-authentik)
Jan 13 23:11:01 jotunheim authentik-pod[35669]: 3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6
× redis.service
     Loaded: loaded (/home/yggdrasil/.config/containers/systemd/redis.container; generated)
     Active: failed (Result: exit-code) since Mon 2025-01-13 23:11:01 CET; 5min ago
 Invocation: 094ce68d00774524b1625f34e7dfd997
    Process: 35573 ExecStart=/usr/bin/podman --log-level=debug run --name redis --cidfile=/run/user/1001/redis.cid --replace --rm --cgroups=split --sdnotify=healthy -d -v systemd-redis:/data:Z --health-cmd redis-cli ping | grep PONG --health-interval 30s --health-retries 5 --health-start-period 20s --health-timeout 3s --pod-id-file /run/user/1001/authentik-pod.pod-id docker.io/arm64v8/redis:alpine --save 60 1 --loglevel warning (code=exited, status=1/FAILURE)
    Process: 35650 ExecStopPost=/usr/bin/podman --log-level=debug rm -v -f -i --cidfile=/run/user/1001/redis.cid (code=exited, status=0/SUCCESS)
   Main PID: 35573 (code=exited, status=1/FAILURE)
        CPU: 284ms

Jan 13 23:11:01 jotunheim redis[35650]: dab322af54ea8d3de9f3f15e15764a4cb3551a4f0fd2867cdab256b0cf10e414
Jan 13 23:11:01 jotunheim redis[35650]: time="2025-01-13T23:11:01+01:00" level=debug msg="Called rm.PersistentPostRunE(/usr/bin/podman --log-level=debug rm -v -f -i --cidfile=/run/user/1001/redis.cid)"
Jan 13 23:11:01 jotunheim redis[35650]: time="2025-01-13T23:11:01+01:00" level=debug msg="Shutting down engines"
Jan 13 23:11:01 jotunheim redis[35650]: time="2025-01-13T23:11:01+01:00" level=info msg="Received shutdown.Stop(), terminating!" PID=35650
Jan 13 23:11:01 jotunheim redis[35650]: time="2025-01-13T23:11:01+01:00" level=debug msg="Adding parallel job to stop container 0422d3a7ae95cd8d8c5152c5c22fae1cac2ad5ddec5a6b749ab19c59dec902af"
Jan 13 23:11:01 jotunheim podman[35650]: 2025-01-13 23:11:01.519349615 +0100 CET m=+0.135760593 pod stop 3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6 (image=, name=systemd-authentik)
Jan 13 23:11:01 jotunheim redis[35650]: time="2025-01-13T23:11:01+01:00" level=debug msg="Stopping ctr 0422d3a7ae95cd8d8c5152c5c22fae1cac2ad5ddec5a6b749ab19c59dec902af (timeout 10)"
Jan 13 23:11:01 jotunheim redis[35650]: time="2025-01-13T23:11:01+01:00" level=debug msg="Removing pod cgroup user.slice/user-1001.slice/user@1001.service/user.slice/user-libpod_pod_3a77bbbbfce7bfd8f0197d4814cb9e42a428ad89c7f71bcfec161090abd401a6.slice"
Jan 13 23:11:01 jotunheim systemd[1700]: redis.service: Failed with result 'exit-code'.
Jan 13 23:11:01 jotunheim systemd[1700]: Stopped redis.service.
× postgres.service
     Loaded: loaded (/home/yggdrasil/.config/containers/systemd/postgres.container; generated)
     Active: failed (Result: exit-code) since Mon 2025-01-13 23:21:37 CET; 26s ago
 Invocation: e7459cf725264e99a21e03d107e82180
    Process: 36132 ExecStart=/usr/bin/podman --log-level=debug run --name postgres --cidfile=/run/user/1001/postgres.cid --replace --rm --cgroups=split --sdnotify=healthy -d -v /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data:Z --env-file /home/yggdrasil/authentik.env --health-cmd pg_isready -d ${POSTGRES_DB} -U ${POSTGRES_USER} --health-interval 30s --health-retries 5 --health-start-period 20s --health-timeout 5s --pod-id-file /run/user/1001/authentik-pod.pod-id docker.io/arm64v8/postgres:16-alpine (code=exited, status=1/FAILURE)
    Process: 36184 ExecStopPost=/usr/bin/podman --log-level=debug rm -v -f -i --cidfile=/run/user/1001/postgres.cid (code=exited, status=0/SUCCESS)
   Main PID: 36132 (code=exited, status=1/FAILURE)
        CPU: 170ms

Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=info msg="Setting parallel job count to 13"
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=debug msg="Called rm.PersistentPostRunE(/usr/bin/podman --log-level=debug rm -v -f -i --cidfile=/run/user/1001/postgres.cid)"
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=debug msg="Shutting down engines"
Jan 13 23:21:37 jotunheim postgres[36184]: time="2025-01-13T23:21:37+01:00" level=info msg="Received shutdown.Stop(), terminating!" PID=36184
Jan 13 23:21:37 jotunheim systemd[1700]: postgres.service: Failed with result 'exit-code'.
Jan 13 23:21:37 jotunheim systemd[1700]: Stopped postgres.service.

Systemd unit files:

# authentik-postgresql.container
[Container]
ContainerName=postgres
EnvironmentFile=%h/authentik.env
HealthCmd=pg_isready -d ${POSTGRES_DB} -U ${POSTGRES_USER}
HealthInterval=30s
HealthRetries=5
HealthStartPeriod=20s
HealthTimeout=5s
Image=docker.io/arm64v8/postgres:16-alpine
Pod=authentik.pod
Volume=%h/container_volumes/postgres/data:/var/lib/postgresql/data:Z
Notify=healthy
GlobalArgs=--log-level=debug

[Service]
Restart=always

[Install]
WantedBy=default.target
# authentik-redis.container
[Container]
ContainerName=redis
Exec=--save 60 1 --loglevel warning
HealthCmd=redis-cli ping | grep PONG
HealthInterval=30s
HealthRetries=5
HealthStartPeriod=20s
HealthTimeout=3s
Image=docker.io/arm64v8/redis:alpine
Pod=authentik.pod
Volume=%h/container_volumes/redis/data:/data:Z
Notify=healthy
GlobalArgs=--log-level=debug

[Service]
Restart=always

[Install]
WantedBy=default.target
# authentik.pod
[Pod]
PublishPort=9000:9000
PublishPort=9443:9443
Network=pasta:-I,tap0,--ipv4-only,-a,10.0.42.20,-n,24,-g,10.0.42.2,--dns-forward,10.0.42.3,--no-ndp,--no-dhcpv6,--no-dhcp
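As a debugging step (not part of the original report), quadlet's dry-run mode can show exactly what systemd units it generates from these files; the generator path below is the one documented in podman-systemd.unit(5) and may differ by distribution:

```shell
# Print the units quadlet would generate from ~/.config/containers/systemd/
/usr/lib/systemd/system-generators/podman-system-generator --user --dryrun
```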

Steps to reproduce the issue

  1. Install the provided unit files
  2. Reload systemd
  3. Start postgres, redis or authentik-pod
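The steps above, sketched as commands (file names taken from the "Loaded:" paths in the status output; adjust to your layout):

```shell
# 1. Install the unit files into the user quadlet directory
mkdir -p ~/.config/containers/systemd
cp authentik.pod postgres.container redis.container ~/.config/containers/systemd/

# 2. Reload systemd so quadlet regenerates the services
systemctl --user daemon-reload

# 3. Start one of the units
systemctl --user start postgres.service
```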

Describe the results you received

The containers are terminated immediately upon startup.

Describe the results you expected

The containers should start up successfully.

podman info output

host:
  arch: arm64
  buildahVersion: 1.38.0
  cgroupControllers:
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-1.3.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: unknown'
  cpuUtilization:
    idlePercent: 99.88
    systemPercent: 0.06
    userPercent: 0.05
  cpus: 4
  databaseBackend: sqlite
  distribution:
    distribution: opensuse-microos
    version: "20250106"
  eventLogger: journald
  freeLocks: 2044
  hostname: jotunheim
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.12.8-1-default
  linkmode: dynamic
  logDriver: journald
  memFree: 3689070592
  memTotal: 8113238016
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.1.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.1.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19-1.1.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.19
      commit: db31c42ac46e20b5527f5339dcbf6f023fcd539c
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-20241211.09478d5-1.1.aarch64
    version: |
      pasta 20241211.09478d5-1.1
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.2.aarch64
    version: |-
      slirp4netns version 1.3.1
      commit: unknown
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 8650727424
  swapTotal: 8650727424
  uptime: 124h 57m 6.00s (Approximately 5.17 days)
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.opensuse.org
  - registry.suse.com
  - docker.io
store:
  configFile: /home/yggdrasil/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 1
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/yggdrasil/.local/share/containers/storage
  graphRootAllocated: 24696061952
  graphRootUsed: 24217546752
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1259
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/yggdrasil/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1733485830
  BuiltTime: Fri Dec  6 12:50:30 2024
  GitCommit: ""
  GoVersion: go1.23.4
  Os: linux
  OsArch: linux/arm64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Running on a VPS with openSUSE Micro OS

Additional information

No response

@Xinayder Xinayder added the kind/bug Categorizes issue or PR as related to a bug. label Jan 13, 2025
@giuseppe
Member

Does the container work if you launch it manually? E.g.:

/usr/bin/podman --log-level=debug run --name postgres --replace --rm --sdnotify=healthy -d -v /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data:Z --env-file /home/yggdrasil/authentik.env --health-cmd pg_isready -d ${POSTGRES_DB} -U ${POSTGRES_USER} --health-interval 30s --health-retries 5 --health-start-period 20s --health-timeout 5s --pod-id-file /run/user/1001/authentik-pod.pod-id docker.io/arm64v8/postgres:16-alpine

@Xinayder
Author

Xinayder commented Jan 14, 2025

Not really. I created the pod manually, then ran the command from the systemd unit for postgres:

yggdrasil@jotunheim:~/.config/containers/systemd> /usr/bin/podman --log-level=debug run --name postgres --replace --rm --sdnotify=healthy -d -v /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data:Z --env-file /home/yggdrasil/authentik.env --health-cmd "pg_isready -d authentik -U authentik" --health-interval 30s --health-retries 5 --health-start-period 20s --health-timeout 5s --pod-id-file /run/user/1001/authentik-pod.pod-id docker.io/arm64v8/postgres:16-alpine
INFO[0000] /usr/bin/podman filtering at log level debug 
DEBU[0000] Called run.PersistentPreRunE(/usr/bin/podman --log-level=debug run --name postgres --replace --rm --sdnotify=healthy -d -v /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data:Z --env-file /home/yggdrasil/authentik.env --health-cmd pg_isready -d authentik -U authentik --health-interval 30s --health-retries 5 --health-start-period 20s --health-timeout 5s --pod-id-file /run/user/1001/authentik-pod.pod-id docker.io/arm64v8/postgres:16-alpine) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
INFO[0000] Using sqlite as database backend             
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/yggdrasil/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1001/containers     
DEBU[0000] Using static dir /home/yggdrasil/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/yggdrasil/.local/share/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 13             
DEBU[0000] Pulling image docker.io/arm64v8/postgres:16-alpine (policy: missing) 
DEBU[0000] Looking up image "docker.io/arm64v8/postgres:16-alpine" in local containers storage 
DEBU[0000] Normalized platform linux/arm64 to {arm64 linux  [] } 
DEBU[0000] Trying "docker.io/arm64v8/postgres:16-alpine" ... 
DEBU[0000] parsed reference into "[overlay@/home/yggdrasil/.local/share/containers/storage+/run/user/1001/containers]@650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Found image "docker.io/arm64v8/postgres:16-alpine" as "docker.io/arm64v8/postgres:16-alpine" in local containers storage 
DEBU[0000] Found image "docker.io/arm64v8/postgres:16-alpine" as "docker.io/arm64v8/postgres:16-alpine" in local containers storage ([overlay@/home/yggdrasil/.local/share/containers/storage+/run/user/1001/containers]@650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c) 
DEBU[0000] exporting opaque data as blob "sha256:650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Looking up image "docker.io/arm64v8/postgres:16-alpine" in local containers storage 
DEBU[0000] Normalized platform linux/arm64 to {arm64 linux  [] } 
DEBU[0000] Trying "docker.io/arm64v8/postgres:16-alpine" ... 
DEBU[0000] parsed reference into "[overlay@/home/yggdrasil/.local/share/containers/storage+/run/user/1001/containers]@650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Found image "docker.io/arm64v8/postgres:16-alpine" as "docker.io/arm64v8/postgres:16-alpine" in local containers storage 
DEBU[0000] Found image "docker.io/arm64v8/postgres:16-alpine" as "docker.io/arm64v8/postgres:16-alpine" in local containers storage ([overlay@/home/yggdrasil/.local/share/containers/storage+/run/user/1001/containers]@650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c) 
DEBU[0000] exporting opaque data as blob "sha256:650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] User mount /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data options [Z] 
DEBU[0000] Looking up image "docker.io/arm64v8/postgres:16-alpine" in local containers storage 
DEBU[0000] Normalized platform linux/arm64 to {arm64 linux  [] } 
DEBU[0000] Trying "docker.io/arm64v8/postgres:16-alpine" ... 
DEBU[0000] parsed reference into "[overlay@/home/yggdrasil/.local/share/containers/storage+/run/user/1001/containers]@650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Found image "docker.io/arm64v8/postgres:16-alpine" as "docker.io/arm64v8/postgres:16-alpine" in local containers storage 
DEBU[0000] Found image "docker.io/arm64v8/postgres:16-alpine" as "docker.io/arm64v8/postgres:16-alpine" in local containers storage ([overlay@/home/yggdrasil/.local/share/containers/storage+/run/user/1001/containers]@650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c) 
DEBU[0000] exporting opaque data as blob "sha256:650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Inspecting image 650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c 
DEBU[0000] exporting opaque data as blob "sha256:650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Inspecting image 650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c 
DEBU[0000] Inspecting image 650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c 
DEBU[0000] Inspecting image 650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c 
DEBU[0000] Image has volume at "/var/lib/postgresql/data" 
DEBU[0000] Adding anonymous image volume at "/var/lib/postgresql/data" 
DEBU[0000] using systemd mode: false                    
DEBU[0000] adding container to pod systemd-authentik    
DEBU[0000] New container has a health check             
DEBU[0000] setting container name postgres              
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
INFO[0000] Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host 
DEBU[0000] Adding mount /proc                           
DEBU[0000] Adding mount /dev                            
DEBU[0000] Adding mount /dev/pts                        
DEBU[0000] Adding mount /dev/mqueue                     
DEBU[0000] Adding mount /sys                            
DEBU[0000] Adding mount /sys/fs/cgroup                  
DEBU[0000] Allocated lock 6 for container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a 
DEBU[0000] exporting opaque data as blob "sha256:650d53b3db1d2b5949c9f25be20799848807412357bced8a8ac6da0f1884120c" 
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported 
DEBU[0000] Check for idmapped mounts support            
DEBU[0000] Created container "b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a" 
DEBU[0000] Container "b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a" has work directory "/home/yggdrasil/.local/share/containers/storage/overlay-containers/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a/userdata" 
DEBU[0000] Container "b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a" has run directory "/run/user/1001/containers/overlay-containers/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a/userdata" 
DEBU[0000] Strongconnecting node 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca 
DEBU[0000] Pushed 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca onto stack 
DEBU[0000] Finishing node 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca. Popped 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca off stack 
DEBU[0000] overlay: mount_data=lowerdir=/home/yggdrasil/.local/share/containers/storage/overlay/l/TO724I4QHQUHOWHEAHXP2QAZNF,upperdir=/home/yggdrasil/.local/share/containers/storage/overlay/e7f838cbe0de558291ea8b59132b9edcf1d14b44c3535decb265ca9fa0ba9e02/diff,workdir=/home/yggdrasil/.local/share/containers/storage/overlay/e7f838cbe0de558291ea8b59132b9edcf1d14b44c3535decb265ca9fa0ba9e02/work,userxattr 
DEBU[0000] Made network namespace at /run/user/1001/netns/netns-ec5a2d6f-1736-b16e-125b-57e634b5c883 for container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca 
DEBU[0000] pasta arguments: --config-net -I tap0 --ipv4-only -a 10.0.42.20 -n 24 -g 10.0.42.2 --dns-forward 10.0.42.3 --no-ndp --no-dhcpv6 --no-dhcp -t 9000-9000:9000-9000 -t 9443-9443:9443-9443 -u none -T none -U none --no-map-gw --quiet --netns /run/user/1001/netns/netns-ec5a2d6f-1736-b16e-125b-57e634b5c883 --map-guest-addr 169.254.1.2 
DEBU[0000] Mounted container "10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca" at "/home/yggdrasil/.local/share/containers/storage/overlay/e7f838cbe0de558291ea8b59132b9edcf1d14b44c3535decb265ca9fa0ba9e02/merged" 
DEBU[0000] Created root filesystem for container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca at /home/yggdrasil/.local/share/containers/storage/overlay/e7f838cbe0de558291ea8b59132b9edcf1d14b44c3535decb265ca9fa0ba9e02/merged 
DEBU[0000] /proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts 
DEBU[0000] Setting Cgroups for container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca to user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice:libpod:10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/home/yggdrasil/.local/share/containers/storage/overlay/e7f838cbe0de558291ea8b59132b9edcf1d14b44c3535decb265ca9fa0ba9e02/merged" 
DEBU[0000] Created OCI spec for container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca at /home/yggdrasil/.local/share/containers/storage/overlay-containers/10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca/userdata/config.json 
DEBU[0000] Created cgroup path user.slice/user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice for parent user.slice and name libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990 
DEBU[0000] Created cgroup user.slice/user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice 
DEBU[0000] Got pod cgroup as user.slice/user-1001.slice/user@1001.service/user.slice/user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca -u 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca -r /usr/bin/crun -b /home/yggdrasil/.local/share/containers/storage/overlay-containers/10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca/userdata -p /run/user/1001/containers/overlay-containers/10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca/userdata/pidfile -n systemd-authentik-infra --exit-dir /run/user/1001/libpod/tmp/exits --persist-dir /run/user/1001/libpod/tmp/persist/10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1001/authentik-pod.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/yggdrasil/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/yggdrasil/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 42706                              
INFO[0000] Got Conmon PID as 42704                      
DEBU[0000] Created container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca in OCI runtime 
DEBU[0000] Starting container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca with command [/catatonit -P] 
DEBU[0000] Started container 10a4006558b6cb9928bafcfb88772d92001b40cab6fd9f82069e68be404190ca 
DEBU[0000] Notify sent successfully                     
DEBU[0000] Cached value indicated that volatile is being used 
DEBU[0000] overlay: mount_data=lowerdir=/home/yggdrasil/.local/share/containers/storage/overlay/l/KWDDDGPZKUIV7I7XSJ67N42VNH:/home/yggdrasil/.local/share/containers/storage/overlay/l/K5EW3GIFV3MKYRDE3MXBUUAIQB:/home/yggdrasil/.local/share/containers/storage/overlay/l/LCV6ZDYPUQXK23KAPSOYEBSB7N:/home/yggdrasil/.local/share/containers/storage/overlay/l/DA5XGKBQ3GE4T2TK2B4VPVKFDZ:/home/yggdrasil/.local/share/containers/storage/overlay/l/QYLTFPHCBHXLHJ5DAPG52LHP3H:/home/yggdrasil/.local/share/containers/storage/overlay/l/APBZKDZOFR6ODP6TXCUDZ5KYNM:/home/yggdrasil/.local/share/containers/storage/overlay/l/GIUX36B2XO72TROIYJ6A6RTREU:/home/yggdrasil/.local/share/containers/storage/overlay/l/NVC4MYO5RBLQELFROULNLSLYOX:/home/yggdrasil/.local/share/containers/storage/overlay/l/YBBIJTEAMIOAKLLQ7LR3GFN65A:/home/yggdrasil/.local/share/containers/storage/overlay/l/FLTVHYGG4S5EWRWJDS7OX3KZVR:/home/yggdrasil/.local/share/containers/storage/overlay/l/RQFNOS7EQ5V2OFFXUSA2RXA2YO,upperdir=/home/yggdrasil/.local/share/containers/storage/overlay/e3a230de603c347b8bf3bc3126ee8f49743d0e61a23603829b5d6ef61814da3b/diff,workdir=/home/yggdrasil/.local/share/containers/storage/overlay/e3a230de603c347b8bf3bc3126ee8f49743d0e61a23603829b5d6ef61814da3b/work,userxattr,volatile 
DEBU[0000] Mounted container "b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a" at "/home/yggdrasil/.local/share/containers/storage/overlay/e3a230de603c347b8bf3bc3126ee8f49743d0e61a23603829b5d6ef61814da3b/merged" 
DEBU[0000] Created root filesystem for container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a at /home/yggdrasil/.local/share/containers/storage/overlay/e3a230de603c347b8bf3bc3126ee8f49743d0e61a23603829b5d6ef61814da3b/merged 
DEBU[0000] /proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts 
DEBU[0000] Setting Cgroups for container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a to user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice:libpod:b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/home/yggdrasil/.local/share/containers/storage/overlay/e3a230de603c347b8bf3bc3126ee8f49743d0e61a23603829b5d6ef61814da3b/merged" 
DEBU[0000] Created OCI spec for container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a at /home/yggdrasil/.local/share/containers/storage/overlay-containers/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a/userdata/config.json 
DEBU[0000] Created cgroup path user.slice/user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice for parent user.slice and name libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990 
DEBU[0000] Created cgroup user.slice/user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice 
DEBU[0000] Got pod cgroup as user.slice/user-1001.slice/user@1001.service/user.slice/user-libpod_pod_e56caed670e029c7db8c6fdf9b39428bfc087d10fd3bb533c1b39b40a45ae990.slice 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a -u b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a -r /usr/bin/crun -b /home/yggdrasil/.local/share/containers/storage/overlay-containers/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a/userdata -p /run/user/1001/containers/overlay-containers/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a/userdata/pidfile -n postgres --exit-dir /run/user/1001/libpod/tmp/exits --persist-dir /run/user/1001/libpod/tmp/persist/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1001/containers/overlay-containers/b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/yggdrasil/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/yggdrasil/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg --rm --exit-command-arg 
b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 42712                              
INFO[0000] Got Conmon PID as 42710                      
DEBU[0000] Created container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a in OCI runtime 
DEBU[0000] creating systemd-transient files: systemd-run [--property LogLevelMax=notice --user --setenv=PATH=/home/yggdrasil/.local/bin:/home/yggdrasil/bin:/usr/local/bin:/usr/bin:/bin --unit b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a-2cc0a022eb4a3200 --on-unit-inactive=30s --timer-property=AccuracySec=1s /usr/bin/podman --log-level=debug --syslog healthcheck run b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a] 
DEBU[0000] Starting container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a with command [docker-entrypoint.sh postgres] 
DEBU[0000] Started container b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a 
DEBU[0000] Notify sent successfully                     
b85c5094647eb8e27d2f4992d84a346013a4e14d52c011ce10baf5100a1bef3a
DEBU[0000] Called run.PersistentPostRunE(/usr/bin/podman --log-level=debug run --name postgres --replace --rm --sdnotify=healthy -d -v /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data:Z --env-file /home/yggdrasil/authentik.env --health-cmd pg_isready -d authentik -U authentik --health-interval 30s --health-retries 5 --health-start-period 20s --health-timeout 5s --pod-id-file /run/user/1001/authentik-pod.pod-id docker.io/arm64v8/postgres:16-alpine) 
DEBU[0000] Shutting down engines                        
INFO[0000] Received shutdown.Stop(), terminating!        PID=42683

I'm not sure whether the container is failing because of this error or because of something else:
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

EDIT: the Redis container fails with the same error.
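
For anyone hitting the same symptom, a sketch of how I've been checking why a container exited (the name `postgres` comes from the log above; run as the same rootless user). Guarded so it degrades gracefully on hosts without podman or without that container:

```shell
# Sketch: inspect the recorded exit state of the failing container.
have_podman="$(command -v podman || true)"
if [ -n "$have_podman" ]; then
  # Exit code and any runtime error recorded by podman:
  podman inspect --format '{{.State.ExitCode}}: {{.State.Error}}' postgres 2>/dev/null \
    || echo "no such container here; run this on the affected host"
  # The quadlet logs to the journal (-l journald in the conmon args above),
  # so the container's own stdout/stderr ends up in the user journal:
  journalctl --user -t conmon --no-pager -n 50 2>/dev/null || true
else
  echo "podman not available; run these commands on the affected host"
fi
```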

Luap99 commented Jan 20, 2025

Did you check the full container logs in the journal? From your output it simply looks like your container exits right away.

If you run the command manually and remove the `-d`, it should show you the container output directly.
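
Following that suggestion, this is the quadlet's `podman run` invocation from the log above without `-d` (and without `--sdnotify=healthy`, and with the health-check flags trimmed for brevity), so the container stays in the foreground and prints its output. The sketch only echoes the command; paste it into a terminal on the affected host to actually run it:

```shell
# Foreground variant of the quadlet-generated run command (echoed, not executed).
cmd='podman run --name postgres --replace --rm \
  -v /home/yggdrasil/container_volumes/postgres/data:/var/lib/postgresql/data:Z \
  --env-file /home/yggdrasil/authentik.env \
  --pod-id-file /run/user/1001/authentik-pod.pod-id \
  docker.io/arm64v8/postgres:16-alpine'
printf '%s\n' "$cmd"
```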
