| Name | Theory_2922-4843535-677_0 |
| Workunit | 239301123 |
| Created | 21 Feb 2026, 8:42:33 UTC |
| Sent | 21 Feb 2026, 22:46:38 UTC |
| Report deadline | 4 Mar 2026, 22:46:38 UTC |
| Received | 23 Feb 2026, 4:46:33 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 10864216 |
| Run time | 2 hours 2 min 6 sec |
| CPU time | 1 hour 48 min 0 sec |
| Priority | 0 |
| Validate state | Valid |
| Credit | 142.45 |
| Device peak FLOPS | 8.40 GFLOPS |
| Application version | Theory Simulation v302.10 (docker) windows_x86_64 |
| Peak working set size | 13.46 MB |
| Peak swap size | 2.77 MB |
| Peak disk usage | 2.06 MB |
<core_client_version>8.2.8</core_client_version>
<![CDATA[
<stderr_txt>
localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 29 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
13407.80% 0B / 33.5GB
EOM
invalid usage stats; using defaults
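The sample above fails validation because MemUsage reads 0B: without a per-container cgroup, rootless podman has no memory accounting to report, while CPUPerc sums over all cores and can legitimately exceed 100% on a multi-core job. A minimal sketch of such a guard, assuming (this is not the wrapper's actual code) that a used-memory figure of 0B is what marks the sample invalid:

# Hypothetical validity check; the container name and format string
# are taken verbatim from the command above.
name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
out=$(podman stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" "$name")
mem=${out#* }    # drop the CPUPerc field, leaving e.g. "0B / 33.5GB"
mem=${mem%% *}   # keep the used-memory figure, e.g. "0B"
[ "$mem" = "0B" ] && echo "invalid usage stats; using defaults"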
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
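The pause failure above is what crun reports when a rootless container has no cgroup to freeze; pausing rootless containers generally requires a cgroup v2 (unified) host. A quick way to see what the host offers (the Go-template field names follow the podman documentation; treat them as an assumption and fall back to the grep if the format string is rejected):

# Report the cgroup version and manager podman is using; "v2" is what
# rootless "podman pause" needs.
podman info --format '{{.Host.CgroupsVersion}} {{.Host.CgroupManager}}'
# Version-agnostic fallback:
podman info | grep -i cgroup
# On a cgroup v2 host this file exists at the root of the unified mount:
ls /sys/fs/cgroup/cgroup.controllers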
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 29 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
13331.29% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 29 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 30 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 30 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
13107.44% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 30 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
13034.68% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 30 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12962.64% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 30 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12891.48% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 30 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12821.16% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 31 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12751.65% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 31 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12682.90% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 31 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12614.98% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 31 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12547.88% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 31 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
12481.48% 0B / 33.5GB
EOM
invalid usage stats; using defaults
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
cannot pause the container without a cgroup
Error: `/usr/bin/crun pause 1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7` failed: exit status 1
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Up 32 minutes 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: unpause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "1f0c17033912881ef5cffd7261c39632abf0021a8295bd03c2ea1795717496a7" is not paused, can't unpause: container state improper
EOM
running docker command: pause boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Error: "exited" is not running, can't pause: container state improper
EOM
running docker command: ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
program: podman
command output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f0c17033912 localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest /bin/sh -c ./entr... 28 hours ago Exited (0) 9 seconds ago 80/tcp boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: logs boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
Got a wpad.dat from lhchomeproxy.{cern.ch|fnal.gov}
Will use proxies from there for CVMFS and Frontier
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
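For reference, the local proxy the hint recommends can be as small as the generic squid example in the CVMFS documentation. The sketch below is an untested adaptation of that example (adjust the src range and cache size to your LAN, and see the two forum threads above for project-specific advice); once it runs, exporting http_proxy=http://<host>:3128 on the client addresses the "Environment HTTP proxy: not set" status above.

# Hypothetical minimal squid.conf for a shared CVMFS/Frontier cache;
# directives follow the generic CVMFS squid example, values are
# placeholders to adjust.
cat > /etc/squid/squid.conf <<'EOF'
http_port 3128
acl local_nodes src 192.168.0.0/16 10.0.0.0/8 172.16.0.0/12
http_access allow local_nodes
http_access deny all
minimum_expiry_time 0
maximum_object_size 1024 MB
cache_mem 128 MB
maximum_object_size_in_memory 128 KB
cache_dir ufs /var/spool/squid 20000 16 256
EOF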
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.000s 0m0.006s
32m22.513s 0m52.634s
job: cpuusage=1995
===> [runRivet] Sun Feb 22 00:18:46 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 22M 1% /cvmfs/alice.cern.ch
cvmfs2 17M 1% /cvmfs/grid.cern.ch
cvmfs2 711M 18% /cvmfs/sft.cern.ch
total 750M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
30m14.532s 0m43.824s
===> [runRivet] Sun Feb 22 16:30:47 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
job: cpuusage=1858
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 32M 1% /cvmfs/grid.cern.ch
cvmfs2 711M 18% /cvmfs/sft.cern.ch
total 786M 5% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
===> [runRivet] Sun Feb 22 20:13:56 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.006s 0m0.000s
32m6.258s 0m45.796s
job: cpuusage=1972
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 711M 18% /cvmfs/sft.cern.ch
total 801M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
30m50.219s 0m47.431s
job: cpuusage=1898
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 711M 18% /cvmfs/sft.cern.ch
total 801M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
===> [runRivet] Sun Feb 22 21:54:32 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
===> [runRivet] Sun Feb 22 22:34:05 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
===> [runRivet] Sun Feb 22 23:40:45 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
===> [runRivet] Mon Feb 23 00:20:29 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
===> [runRivet] Mon Feb 23 02:28:03 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
===> [runRivet] Mon Feb 23 03:01:38 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
===> [runRivet] Mon Feb 23 03:51:55 UTC 2026 [boinc pp jets 7000 80,-,1060 - pythia8 8.315 tune-4cx 100000 677]
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.003s 0m0.002s
33m36.729s 0m49.098s
job: cpuusage=2066
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 47M 2% /cvmfs/grid.cern.ch
cvmfs2 711M 18% /cvmfs/sft.cern.ch
total 801M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.003s 0m0.003s
34m6.961s 0m44.097s
job: cpuusage=2091
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 711M 18% /cvmfs/sft.cern.ch
total 816M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.003s 0m0.003s
31m32.687s 0m41.820s
job: cpuusage=1935
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 712M 18% /cvmfs/sft.cern.ch
total 816M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
30m46.760s 0m40.728s
job: cpuusage=1887
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 712M 18% /cvmfs/sft.cern.ch
total 816M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.000s 0m0.005s
35m20.135s 0m45.240s
job: cpuusage=2165
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 712M 18% /cvmfs/sft.cern.ch
total 816M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1bnl-cvmfs.openhtc.io DIRECT
******************************************************************
IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1bnl-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=10212
job: logsize=76 k
job: times=
0m0.006s 0m0.000s
35m28.572s 0m45.865s
job: cpuusage=2174
Job Finished
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 43M 2% /cvmfs/alice.cern.ch
cvmfs2 62M 2% /cvmfs/grid.cern.ch
cvmfs2 712M 18% /cvmfs/sft.cern.ch
total 816M 6% -
boinc_shutdown called with exit code 0
sd_delay: 0
EOM
stderr from container:
(identical to the output of the "logs" command above; duplicate omitted)
EOM
stderr end
running docker command: container rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
program: podman
command output:
boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0
EOM
running docker command: image rm boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677
program: podman
command output:
Untagged: localhost/boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677:latest
EOM
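As a sanity check that both removals took effect, the same ps filter used throughout this log should now return nothing, and the image should be gone from the local store (the grep pattern is just the workunit suffix; this check is hypothetical, not part of the wrapper):

# Expect no rows beyond the header, and "image removed" from the grep.
podman ps --all -f "name=boinc__lhcathome.cern.ch_lhcathome__theory_2922-4843535-677_0"
podman images | grep theory_2922-4843535-677 || echo "image removed"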
2026-02-22 23:23:28 (29752): called boinc_finish(0)
</stderr_txt>
]]>