Name Theory_2922-4896976-677_0
Workunit 239303040
Created 21 Feb 2026, 12:22:46 UTC
Sent 21 Feb 2026, 22:46:38 UTC
Report deadline 4 Mar 2026, 22:46:38 UTC
Received 23 Feb 2026, 2:27:23 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x00000000)
Computer ID 10864216
Run time 38 min 42 sec
CPU time 34 min 40 sec
Priority 0
Validate state Valid
Credit 45.15
Device peak FLOPS 8.40 GFLOPS
Application version Theory Simulation v302.10 (docker)
windows_x86_64
Peak working set size 13.45 MB
Peak swap size 2.71 MB
Peak disk usage 1.38 MB

Stderr output

<core_client_version>8.2.8</core_client_version>
<![CDATA[
<stderr_txt>
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
14724.38% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 13 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 14 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
14385.85% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 14 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
14218.28% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 14 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
14054.93% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 14 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13895.49% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 14 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13740.07% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 15 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13588.31% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 15 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13440.16% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 15 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13295.51% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 15 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13154.24% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 15 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
13016.16% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 15 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12881.31% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 16 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12754.38% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 16 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 16 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12501.31% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 16 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12377.71% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 16 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12256.87% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 16 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12138.21% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 17 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
12021.26% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 17 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11906.51% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 17 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11794.05% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 17 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11688.01% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 17 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11580.05% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 18 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11474.24% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 18 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11370.47% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9` failed: exit status 1
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 18 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS         PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Up 18 minutes  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: stats --no-stream  --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
11171.03% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "exited" is not running, can't pause: container state improper
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Error: "add18492b9ed6a418c4bd813e52332e2e5ea707cedd904536f63a5dc46f844e9" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0"
program: podman
command output:
CONTAINER ID  IMAGE                                                                         COMMAND               CREATED       STATUS                    PORTS       NAMES
add18492b9ed  localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest  /bin/sh -c ./entr...  24 hours ago  Exited (0) 3 seconds ago  80/tcp      boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: logs boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
Got a wpad.dat from lhchomeproxy.{cern.ch|fnal.gov}
Will use proxies from there for CVMFS and Frontier
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
===> [runRivet] Sun Feb 22 00:18:47 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.000s 0m0.006s
19m25.724s 0m31.434s
job: cpuusage=1197
Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           22M   1% /cvmfs/alice.cern.ch
cvmfs2           17M   1% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           713M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
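The hint repeated throughout this log is actionable: a small caching HTTP proxy on the LAN lets concurrently running Theory containers share downloaded CVMFS objects instead of each fetching them DIRECT. A minimal Squid sketch is below; the port, network range, and cache sizes are assumptions, and the two forum threads linked above carry the project-recommended settings.

```
# /etc/squid/squid.conf -- minimal caching-proxy sketch for CVMFS/Frontier
http_port 3128
acl localnet src 192.168.0.0/16               # adjust to your LAN
http_access allow localnet
http_access deny all
cache_dir ufs /var/spool/squid 20000 16 256   # ~20 GB on-disk cache
maximum_object_size 1024 MB                   # CVMFS objects can be large
```

The BOINC client (or the container environment) must then be pointed at the proxy, e.g. via the client's HTTP-proxy setting, for the "Environment HTTP proxy: not set" line above to change.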
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.000s 0m0.005s
18m16.211s 0m28.169s
job: cpuusage=1124
===> [runRivet] Sun Feb 22 16:30:48 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           43M   2% /cvmfs/alice.cern.ch
cvmfs2           32M   1% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           750M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
===> [runRivet] Sun Feb 22 19:49:58 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

===> [runRivet] Sun Feb 22 20:13:55 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

===> [runRivet] Sun Feb 22 21:54:32 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

===> [runRivet] Sun Feb 22 22:34:08 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

===> [runRivet] Sun Feb 22 23:40:47 UTC 2026 [boinc pp z1j 7000 125 - pythia6 6.428 390 100000 677]

2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.000s 0m0.005s
19m42.970s 0m30.034s
job: cpuusage=1213
Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           43M   2% /cvmfs/alice.cern.ch
cvmfs2           47M   2% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           765M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.003s 0m0.003s
20m24.892s 0m29.351s
job: cpuusage=1254
Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           43M   2% /cvmfs/alice.cern.ch
cvmfs2           47M   2% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           765M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.000s 0m0.005s
19m12.806s 0m30.464s
job: cpuusage=1183
Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           43M   2% /cvmfs/alice.cern.ch
cvmfs2           47M   2% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           765M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.000s 0m0.007s
20m12.987s 0m31.586s
job: cpuusage=1245
Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           43M   2% /cvmfs/alice.cern.ch
cvmfs2           47M   2% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           765M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION   HOST                           PROXY
2.13.3.0  http://s1bnl-cvmfs.openhtc.io  DIRECT
******************************************************************
                        IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy:  DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=9860
job: logsize=60 k
job: times=
0m0.003s 0m0.003s
21m48.532s 0m28.722s
job: cpuusage=1337
Job Finished
Filesystem      Used Use% Mounted on
cvmfs2          201K   1% /cvmfs/cvmfs-config.cern.ch
cvmfs2           43M   2% /cvmfs/alice.cern.ch
cvmfs2           62M   2% /cvmfs/grid.cern.ch
cvmfs2          675M  17% /cvmfs/sft.cern.ch
total           780M   5% -
boinc_shutdown called with exit code 0
sd_delay: 0

EOM
stderr from container:
EOM
stderr end
running docker command: container rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
program: podman
command output:
boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677_0
EOM
running docker command: image rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677
program: podman
command output:
Untagged: localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4896976-677:latest
EOM
2026-02-22 18:58:55 (31796): called boinc_finish(0)

</stderr_txt>
]]>


©2026 CERN