| Name | Theory_2922-4886261-677_0 |
| Workunit | 239303459 |
| Created | 21 Feb 2026, 13:21:56 UTC |
| Sent | 22 Feb 2026, 0:18:04 UTC |
| Report deadline | 5 Mar 2026, 0:18:04 UTC |
| Received | 23 Feb 2026, 4:18:42 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 10864216 |
| Run time | 1 hour 25 min 56 sec |
| CPU time | 1 hour 17 min 0 sec |
| Priority | 0 |
| Validate state | Valid |
| Credit | 100.26 |
| Device peak FLOPS | 8.40 GFLOPS |
| Application version | Theory Simulation v302.10 (docker) windows_x86_64 |
| Peak working set size | 13.48 MB |
| Peak swap size | 2.70 MB |
| Peak disk usage | 2.12 MB |
<core_client_version>8.2.8</core_client_version>
<![CDATA[
<stderr_txt>
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
296faef10826 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677:latest /bin/sh -c ./entr... 27 hours ago Up 18 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
18066.24% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 296faef10826253cbf5e417b33a8427273d688d78692a507eeae7257a2335b32` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
Error: "296faef10826253cbf5e417b33a8427273d688d78692a507eeae7257a2335b32" is not paused, can't unpause: container state improper
EOM
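The sampling cycle above reads one line in the format `{{.CPUPerc}} {{.MemUsage}}` and evidently rejects it ("invalid usage stats; using defaults") when the figures are implausible. A minimal sketch of such a validity check (a hypothetical helper, not the actual BOINC wrapper code; the exact rejection criteria are an assumption inferred from this log):

```python
import re

def parse_podman_stats(line: str, n_cpus: int):
    """Parse one line of `podman stats --format "{{.CPUPerc}} {{.MemUsage}}"`.

    Returns (cpu_fraction_of_one_core, mem_bytes), or None when the figures
    are implausible -- mirroring the wrapper's "invalid usage stats" fallback.
    Hypothetical sketch, not the real BOINC wrapper code.
    """
    m = re.match(r"([\d.]+)%\s+([\d.]+)([A-Za-z]+)\s*/", line.strip())
    if not m:
        return None
    cpu = float(m.group(1)) / 100.0
    units = {"B": 1, "kB": 1e3, "KiB": 1024, "MB": 1e6, "MiB": 1024**2,
             "GB": 1e9, "GiB": 1024**3}
    mem = float(m.group(2)) * units.get(m.group(3), 0)
    # A CPU figure far beyond the core count, or a 0 B memory reading,
    # is exactly what this log shows when no per-container cgroup exists.
    if cpu > n_cpus or mem <= 0:
        return None
    return cpu, mem

# The first sample from the log is rejected:
print(parse_podman_stats("18066.24% 0B / 33.5GB", 16))  # → None
```

The "0B" memory reading and the impossible CPU percentage come from the same root cause as the `pause` failures below: without a cgroup, podman has no per-container accounting to report.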
[... the ps / stats / pause / unpause cycle above repeats for roughly 20 more minutes of container uptime. Each `stats` call returns an implausible CPU figure (falling steadily from 18066.24% to 15806.13%) and a memory reading of "0B / 33.5GB", so every sample is rejected with "invalid usage stats; using defaults". Every `pause` attempt fails with "cannot pause the container without a cgroup" (crun exit status 1), and every `unpause` with "... is not paused, can't unpause: container state improper" ...]
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
Error: "exited" is not running, can't pause: container state improper
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
Error: "296faef10826253cbf5e417b33a8427273d688d78692a507eeae7257a2335b32" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
296faef10826 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677:latest /bin/sh -c ./entr... 27 hours ago Exited (0) 7 seconds ago 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
EOM
running docker command: logs boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
Got a wpad.dat from lhchomeproxy.{cern.ch|fnal.gov}
Will use proxies from there for CVMFS and Frontier
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.003s 0m0.003s
23m5.029s 1m34.564s
job: cpuusage=1480
===> [runRivet] Sun Feb 22 00:20:00 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 22M 1% /cvmfs/alice.cern.ch
cvmfs2 17M 1% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 715M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
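The hint block above recommends a local HTTP proxy so that concurrent containers share one CVMFS cache. As a rough illustration only (the package name, port, and config path are assumptions for a Debian-style host; the two forum threads linked above are the project's authoritative setup guide):

```shell
# Run a local caching HTTP proxy such as squid (listens on 3128 by default).
sudo apt-get install squid
# Point the host's CVMFS client at the proxy instead of DIRECT:
echo 'CVMFS_HTTP_PROXY="http://127.0.0.1:3128"' | sudo tee -a /etc/cvmfs/default.local
sudo cvmfs_config reload   # apply the new proxy setting
```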
[... CVMFS probe and local HTTP proxy hint repeated, identical to the block above ...]
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16412
job: logsize=76 k
job: times=
0m0.006s 0m0.000s
21m17.770s 1m26.830s
job: cpuusage=1365
===> [runRivet] Sun Feb 22 16:30:48 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 32M 1% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 751M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
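Incidentally, the `job: cpuusage=` figures look like the rounded sum of the user and sys entries on the second `job: times=` line (the children's times, as printed by the shell `times` builtin). This is an observation from this log, not documented behavior:

```python
def hms_to_seconds(t: str) -> float:
    """Convert a time like '23m5.029s' (shell `times` format) to seconds."""
    mins, rest = t.split("m")
    return int(mins) * 60 + float(rest.rstrip("s"))

# First job:  23m5.029s user + 1m34.564s sys -> "job: cpuusage=1480"
print(round(hms_to_seconds("23m5.029s") + hms_to_seconds("1m34.564s")))   # → 1480
# Second job: 21m17.770s user + 1m26.830s sys -> "job: cpuusage=1365"
print(round(hms_to_seconds("21m17.770s") + hms_to_seconds("1m26.830s")))  # → 1365
```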
[... CVMFS probe and local HTTP proxy hint repeated, identical to the block above; stdout from the next job iterations is interleaved here: ...]
===> [runRivet] Sun Feb 22 19:49:58 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 20:13:54 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 21:54:31 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 22:34:08 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 23:40:46 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Mon Feb 23 00:20:31 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Mon Feb 23 02:28:03 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Mon Feb 23 03:01:38 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
22m27.317s 1m28.980s
job: cpuusage=1436
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 766M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
[... CVMFS probe and local HTTP proxy hint repeated, identical to the block above ...]
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.001s 0m0.004s
23m20.662s 1m27.877s
job: cpuusage=1489
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 766M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
[... CVMFS probe and local HTTP proxy hint repeated, identical to the block above ...]
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
22m12.745s 1m31.542s
job: cpuusage=1424
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 766M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
[... CVMFS probe and local HTTP proxy hint repeated, identical to the block above ...]
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.006s 0m0.000s
23m34.759s 1m35.962s
job: cpuusage=1511
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 766M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
25m7.878s 1m25.800s
job: cpuusage=1594
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 781M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
22m19.224s 1m19.199s
job: cpuusage=1418
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 677M 17% /cvmfs/sft.cern.ch
total 781M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.006s 0m0.000s
21m57.720s 1m21.626s
job: cpuusage=1399
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 677M 17% /cvmfs/sft.cern.ch
total 781M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.005s 0m0.000s
23m39.351s 1m21.756s
job: cpuusage=1501
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 677M 17% /cvmfs/sft.cern.ch
total 781M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
EOM
stderr from container:
Got a wpad.dat from lhchomeproxy.{cern.ch|fnal.gov}
Will use proxies from there for CVMFS and Frontier
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.003s 0m0.003s
23m5.029s 1m34.564s
job: cpuusage=1480
===> [runRivet] Sun Feb 22 00:20:00 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 22M 1% /cvmfs/alice.cern.ch
cvmfs2 17M 1% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 715M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16412
job: logsize=76 k
job: times=
0m0.006s 0m0.000s
21m17.770s 1m26.830s
job: cpuusage=1365
===> [runRivet] Sun Feb 22 16:30:48 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 32M 1% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 751M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
===> [runRivet] Sun Feb 22 19:49:58 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 20:13:54 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 21:54:31 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 22:34:08 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Sun Feb 22 23:40:46 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Mon Feb 23 00:20:31 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Mon Feb 23 02:28:03 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
===> [runRivet] Mon Feb 23 03:01:38 UTC 2026 [boinc pp z1j 13000 160 - pythia6 6.428 391 100000 677]
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
22m27.317s 1m28.980s
job: cpuusage=1436
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 766M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=16416
job: logsize=76 k
job: times=
0m0.001s 0m0.004s
23m20.662s 1m27.877s
job: cpuusage=1489
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 676M 17% /cvmfs/sft.cern.ch
total 766M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
EOM
stderr end
running docker command: container rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
program: podman
command output:
boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677_0
EOM
running docker command: image rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677
program: podman
command output:
Untagged: localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4886261-677:latest
EOM
2026-02-22 22:23:04 (35544): called boinc_finish(0)
</stderr_txt>
]]>
©2026 CERN