| Name | xzcMDmEd738n9Rq4apOajLDm4fhM0noT9bVo0NGKDmb7zKDmPAaIXm_2 |
| Workunit | 238692121 |
| Created | 26 Jan 2026, 15:06:57 UTC |
| Sent | 26 Jan 2026, 15:22:29 UTC |
| Report deadline | 3 Feb 2026, 15:22:29 UTC |
| Received | 27 Jan 2026, 19:48:30 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 10952321 |
| Run time | 2 hours 59 min 40 sec |
| CPU time | 8 hours 0 min 58 sec |
| Priority | 28 |
| Validate state | Valid |
| Credit | 1,179.67 |
| Device peak FLOPS | 33.55 GFLOPS |
| Application version | ATLAS Simulation v3.01 (native_mt) x86_64-pc-linux-gnu |
| Peak working set size | 2.46 GB |
| Peak swap size | 10.19 GB |
| Peak disk usage | 1.56 GB |
<core_client_version>7.24.1</core_client_version>
<![CDATA[
<stderr_txt>
10:55:31 (1068174): wrapper (7.7.26015): starting
10:55:31 (1068174): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 10:55:31] Arguments: --nthreads 3
[2026-01-27 10:55:31] Threads: 3
[2026-01-27 10:55:31] Checking for CVMFS
[2026-01-27 10:55:31] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 10:55:31] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 10:55:31] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 10:55:32] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 10:55:32] 2.13.3.0 8062 990 50356 155589 3 188 10033623 10240000 2748 130560 0 17285031 100.000 34949 487 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 10:55:32] CVMFS is ok
[2026-01-27 10:55:32] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 10:55:32] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 10:55:32] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 10:55:32] Further information can be found at the LHC@home message board.
[2026-01-27 10:55:32] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 10:55:32] Checking for apptainer binary...
[2026-01-27 10:55:32] apptainer is not installed, using version from CVMFS
[2026-01-27 10:55:32] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 10:55:32] pots
[2026-01-27 10:55:32] apptainer works
[2026-01-27 10:55:32] Set ATHENA_PROC_NUMBER=3
[2026-01-27 10:55:32] Set ATHENA_CORE_NUMBER=3
[2026-01-27 10:55:32] Starting ATLAS job with PandaID=6982529453
[2026-01-27 10:55:32] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
11:08:32 (1083031): wrapper (7.7.26015): starting
11:08:32 (1083031): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 11:08:32] Arguments: --nthreads 3
[2026-01-27 11:08:32] Threads: 3
[2026-01-27 11:08:32] This job has been restarted, cleaning up previous attempt
[2026-01-27 11:08:32] Checking for CVMFS
[2026-01-27 11:08:32] Probing /cvmfs/atlas.cern.ch... OK
11:12:25 (1094803): wrapper (7.7.26015): starting
11:12:25 (1094803): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 11:12:25] Arguments: --nthreads 3
[2026-01-27 11:12:25] Threads: 3
[2026-01-27 11:12:25] This job has been restarted, cleaning up previous attempt
[2026-01-27 11:12:25] Checking for CVMFS
[2026-01-27 11:12:25] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 11:12:25] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 11:12:25] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 11:12:25] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 11:12:25] 2.13.3.0 8062 1007 49644 155590 3 184 10043762 10240000 856 130560 0 17583967 99.999 37730 509 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 11:12:25] CVMFS is ok
[2026-01-27 11:12:25] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 11:12:25] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 11:12:25] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 11:12:25] Further information can be found at the LHC@home message board.
[2026-01-27 11:12:25] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 11:12:25] Checking for apptainer binary...
[2026-01-27 11:12:25] apptainer is not installed, using version from CVMFS
[2026-01-27 11:12:25] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 11:12:25] pots
[2026-01-27 11:12:25] apptainer works
[2026-01-27 11:12:25] Set ATHENA_PROC_NUMBER=3
[2026-01-27 11:12:25] Set ATHENA_CORE_NUMBER=3
[2026-01-27 11:12:25] Starting ATLAS job with PandaID=6982529453
[2026-01-27 11:12:25] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
11:16:34 (1127100): wrapper (7.7.26015): starting
11:16:34 (1127100): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 11:16:34] Arguments: --nthreads 3
[2026-01-27 11:16:34] Threads: 3
[2026-01-27 11:16:34] This job has been restarted, cleaning up previous attempt
[2026-01-27 11:16:34] Checking for CVMFS
[2026-01-27 11:16:34] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 11:16:34] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 11:16:34] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 11:16:34] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 11:16:34] 2.13.3.0 8062 1011 49020 155591 2 11 10046761 10240000 920 130560 0 17702327 99.999 39068 525 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 11:16:34] CVMFS is ok
[2026-01-27 11:16:34] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 11:16:34] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 11:16:34] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 11:16:34] Further information can be found at the LHC@home message board.
[2026-01-27 11:16:34] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 11:16:34] Checking for apptainer binary...
[2026-01-27 11:16:34] apptainer is not installed, using version from CVMFS
[2026-01-27 11:16:34] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 11:16:34] pots
[2026-01-27 11:16:34] apptainer works
[2026-01-27 11:16:34] Set ATHENA_PROC_NUMBER=3
[2026-01-27 11:16:34] Set ATHENA_CORE_NUMBER=3
[2026-01-27 11:16:34] Starting ATLAS job with PandaID=6982529453
[2026-01-27 11:16:34] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
11:53:39 (1165245): wrapper (7.7.26015): starting
11:53:39 (1165245): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 11:53:39] Arguments: --nthreads 3
[2026-01-27 11:53:39] Threads: 3
[2026-01-27 11:53:39] This job has been restarted, cleaning up previous attempt
[2026-01-27 11:53:39] Checking for CVMFS
[2026-01-27 11:53:39] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 11:54:21] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 11:54:21] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 11:54:21] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 11:54:21] 2.13.3.0 8062 1049 49168 155592 1 11 10055300 10240001 916 130560 0 18385531 99.999 41589 544 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 11:54:21] CVMFS is ok
[2026-01-27 11:54:21] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 11:54:21] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 11:54:21] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 11:54:21] Further information can be found at the LHC@home message board.
[2026-01-27 11:54:21] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 11:54:21] Checking for apptainer binary...
[2026-01-27 11:54:21] apptainer is not installed, using version from CVMFS
[2026-01-27 11:54:21] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 11:54:21] pots
[2026-01-27 11:54:21] apptainer works
[2026-01-27 11:54:21] Set ATHENA_PROC_NUMBER=3
[2026-01-27 11:54:21] Set ATHENA_CORE_NUMBER=3
[2026-01-27 11:54:21] Starting ATLAS job with PandaID=6982529453
[2026-01-27 11:54:21] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
12:12:08 (1203226): wrapper (7.7.26015): starting
12:12:08 (1203226): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 12:12:08] Arguments: --nthreads 3
[2026-01-27 12:12:08] Threads: 3
[2026-01-27 12:12:08] This job has been restarted, cleaning up previous attempt
[2026-01-27 12:12:08] Checking for CVMFS
[2026-01-27 12:12:08] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 12:12:09] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 12:12:09] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 12:12:09] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 12:12:09] 2.13.3.0 8062 1067 48564 155593 3 9 10061122 10240000 92 130560 0 18614432 99.999 43298 550 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 12:12:09] CVMFS is ok
[2026-01-27 12:12:09] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 12:12:09] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 12:12:09] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 12:12:09] Further information can be found at the LHC@home message board.
[2026-01-27 12:12:09] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 12:12:09] Checking for apptainer binary...
[2026-01-27 12:12:09] apptainer is not installed, using version from CVMFS
[2026-01-27 12:12:09] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 12:12:09] pots
[2026-01-27 12:12:09] apptainer works
[2026-01-27 12:12:09] Set ATHENA_PROC_NUMBER=3
[2026-01-27 12:12:09] Set ATHENA_CORE_NUMBER=3
[2026-01-27 12:12:09] Starting ATLAS job with PandaID=6982529453
[2026-01-27 12:12:09] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
12:59:39 (1256317): wrapper (7.7.26015): starting
12:59:39 (1256317): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 12:59:39] Arguments: --nthreads 3
[2026-01-27 12:59:39] Threads: 3
[2026-01-27 12:59:39] This job has been restarted, cleaning up previous attempt
[2026-01-27 12:59:39] Checking for CVMFS
[2026-01-27 12:59:39] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 12:59:41] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 12:59:41] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 12:59:41] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 12:59:41] 2.13.3.0 8062 1114 48660 155594 3 11 10071010 10240000 916 130560 0 19792208 99.999 44644 552 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 12:59:41] CVMFS is ok
[2026-01-27 12:59:41] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 12:59:41] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 12:59:41] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 12:59:41] Further information can be found at the LHC@home message board.
[2026-01-27 12:59:41] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 12:59:41] Checking for apptainer binary...
[2026-01-27 12:59:41] apptainer is not installed, using version from CVMFS
[2026-01-27 12:59:41] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 12:59:42] pots
[2026-01-27 12:59:42] apptainer works
[2026-01-27 12:59:42] Set ATHENA_PROC_NUMBER=3
[2026-01-27 12:59:42] Set ATHENA_CORE_NUMBER=3
[2026-01-27 12:59:42] Starting ATLAS job with PandaID=6982529453
[2026-01-27 12:59:42] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
13:13:52 (1290938): wrapper (7.7.26015): starting
13:13:52 (1290938): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 13:13:52] Arguments: --nthreads 3
[2026-01-27 13:13:52] Threads: 3
[2026-01-27 13:13:52] This job has been restarted, cleaning up previous attempt
[2026-01-27 13:13:52] Checking for CVMFS
[2026-01-27 13:13:52] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 13:13:53] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 13:13:53] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 13:13:53] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 13:13:53] 2.13.3.0 8062 1128 48920 155594 1 186 10072737 10240000 916 130560 0 20131791 99.999 44715 547 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 13:13:53] CVMFS is ok
[2026-01-27 13:13:53] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 13:13:53] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 13:13:53] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 13:13:53] Further information can be found at the LHC@home message board.
[2026-01-27 13:13:53] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 13:13:53] Checking for apptainer binary...
[2026-01-27 13:13:53] apptainer is not installed, using version from CVMFS
[2026-01-27 13:13:53] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 13:13:53] pots
[2026-01-27 13:13:53] apptainer works
[2026-01-27 13:13:53] Set ATHENA_PROC_NUMBER=3
[2026-01-27 13:13:53] Set ATHENA_CORE_NUMBER=3
[2026-01-27 13:13:53] Starting ATLAS job with PandaID=6982529453
[2026-01-27 13:13:53] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
13:48:24 (1341696): wrapper (7.7.26015): starting
13:48:24 (1341696): wrapper: running run_atlas (--nthreads 3)
[2026-01-27 13:48:24] Arguments: --nthreads 3
[2026-01-27 13:48:24] Threads: 3
[2026-01-27 13:48:24] This job has been restarted, cleaning up previous attempt
[2026-01-27 13:48:24] Checking for CVMFS
[2026-01-27 13:48:25] Probing /cvmfs/atlas.cern.ch... OK
[2026-01-27 13:48:25] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-01-27 13:48:25] Running cvmfs_config stat atlas.cern.ch
[2026-01-27 13:48:26] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-01-27 13:48:26] 2.13.3.0 8062 1163 48848 155595 3 188 10076718 10240000 916 130560 0 20996940 99.999 46126 548 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2026-01-27 13:48:26] CVMFS is ok
[2026-01-27 13:48:26] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-01-27 13:48:26] Small home clusters do not require a local http proxy but it is suggested if
[2026-01-27 13:48:26] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2026-01-27 13:48:26] Further information can be found at the LHC@home message board.
[2026-01-27 13:48:26] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-01-27 13:48:26] Checking for apptainer binary...
[2026-01-27 13:48:26] apptainer is not installed, using version from CVMFS
[2026-01-27 13:48:26] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-01-27 13:48:26] pots
[2026-01-27 13:48:26] apptainer works
[2026-01-27 13:48:26] Set ATHENA_PROC_NUMBER=3
[2026-01-27 13:48:26] Set ATHENA_CORE_NUMBER=3
[2026-01-27 13:48:26] Starting ATLAS job with PandaID=6982529453
[2026-01-27 13:48:26] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc-client/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
[2026-01-27 14:46:57] *** The last 200 lines of the pilot log: ***
[2026-01-27 14:46:57] protocol_id=None
[2026-01-27 14:46:57] protocols=[{'endpoint': 'davs://dav.ndgf.org:443', 'flavour': 'WEBDAV', 'id': 331, 'path': '/atlas/disk/atlasdatadisk/rucio/'}]
[2026-01-27 14:46:57] replicas=None
[2026-01-27 14:46:57] scope=mc23_13p6TeV
[2026-01-27 14:46:57] status=None
[2026-01-27 14:46:57] status_code=0
[2026-01-27 14:46:57] storage_token=
[2026-01-27 14:46:57] surl=/var/lib/boinc-client/slots/0/PanDA_Pilot-6982529453/log.48310826._009107.job.log.tgz.1
[2026-01-27 14:46:57] turl=davs://dav.ndgf.org:443/atlas/disk/atlasdatadisk/rucio/mc23_13p6TeV/26/83/log.48310826._009107.job.log.tgz.1
[2026-01-27 14:46:57] workdir=None
[2026-01-27 14:46:57] ]
[2026-01-27 14:46:57] 2026-01-27 19:46:19,833 | INFO | transferring file log.48310826._009107.job.log.tgz.1 from /var/lib/boinc-client/slots/0/PanDA_Pilot-6982529453/log.48310826._009107.job.log.tgz.1 to /var/lib/boinc
[2026-01-27 14:46:57] 2026-01-27 19:46:19,834 | INFO | executing command: /usr/bin/env mv /var/lib/boinc-client/slots/0/PanDA_Pilot-6982529453/log.48310826._009107.job.log.tgz.1 /var/lib/boinc-client/slots/0/log.483108
[2026-01-27 14:46:57] 2026-01-27 19:46:19,843 | INFO | adding to output.list: log.48310826._009107.job.log.tgz.1 davs://dav.ndgf.org:443/atlas/disk/atlasdatadisk/rucio/mc23_13p6TeV/26/83/log.48310826._009107.job.log.tg
[2026-01-27 14:46:57] 2026-01-27 19:46:19,843 | INFO | alt stage-out settings: ['pl', 'write_lan', 'w', 'default'], allow_altstageout=False, remain_files=0, has_altstorage=True
[2026-01-27 14:46:57] 2026-01-27 19:46:19,843 | INFO | summary of transferred files:
[2026-01-27 14:46:57] 2026-01-27 19:46:19,843 | INFO | -- lfn=log.48310826._009107.job.log.tgz.1, status_code=0, status=transferred
[2026-01-27 14:46:57] 2026-01-27 19:46:19,843 | INFO | stage-out finished correctly
[2026-01-27 14:46:57] 2026-01-27 19:46:19,920 | WARNING | process 1360444 can no longer be monitored (due to stat problems) - aborting
[2026-01-27 14:46:57] 2026-01-27 19:46:19,920 | INFO | using path: /var/lib/boinc-client/slots/0/PanDA_Pilot-6982529453/memory_monitor_summary.json (trf name=prmon)
[2026-01-27 14:46:57] 2026-01-27 19:46:20,004 | INFO | number of running child processes to parent process 1360444: 1
[2026-01-27 14:46:57] 2026-01-27 19:46:20,004 | INFO | maximum number of monitored processes: 6
[2026-01-27 14:46:57] 2026-01-27 19:46:20,532 | INFO | time since job start (3467s) is within the limit (349056.0s)
[2026-01-27 14:46:57] 2026-01-27 19:46:22,507 | INFO | monitor loop #266: job 0:6982529453 is in state 'finished'
[2026-01-27 14:46:57] 2026-01-27 19:46:22,507 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2026-01-27 14:46:57] 2026-01-27 19:46:22,537 | INFO | time since job start (3469s) is within the limit (349056.0s)
[2026-01-27 14:46:57] 2026-01-27 19:46:23,066 | INFO | finished stage-out for finished payload, adding job to finished_jobs queue
[2026-01-27 14:46:57] 2026-01-27 19:46:24,542 | INFO | time since job start (3471s) is within the limit (349056.0s)
[2026-01-27 14:46:57] 2026-01-27 19:46:24,563 | INFO | job 6982529453 has state=finished
[2026-01-27 14:46:57] 2026-01-27 19:46:24,564 | INFO | preparing for final server update for job 6982529453 in state='finished'
[2026-01-27 14:46:57] 2026-01-27 19:46:25,010 | INFO | monitor loop #267: job 0:6982529453 is in state 'finished'
[2026-01-27 14:46:57] 2026-01-27 19:46:25,010 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,044 | INFO | sent prmon JSON dictionary to logstash server (urllib method)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,044 | INFO | reading metadata from: /var/lib/boinc-client/slots/0/PanDA_Pilot-6982529453/jobReport.json
[2026-01-27 14:46:57] 2026-01-27 19:46:25,045 | INFO | added worker_node to metadata from /var/lib/boinc-client/slots/0/workernode_map.json
[2026-01-27 14:46:57] 2026-01-27 19:46:25,045 | INFO | this job has now completed (state=finished)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,045 | INFO | pilot will not update the server (heartbeat message will be written to file)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,045 | INFO | log transfer has been attempted: DONE
[2026-01-27 14:46:57] 2026-01-27 19:46:25,045 | INFO | job 6982529453 has finished - writing final server update
[2026-01-27 14:46:57] 2026-01-27 19:46:25,045 | INFO | total number of processed events: 400 (read)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,046 | INFO | using path: /var/lib/boinc-client/slots/0/PanDA_Pilot-6982529453/memory_monitor_summary.json (trf name=prmon)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,046 | INFO | extracted standard info from prmon json
[2026-01-27 14:46:57] 2026-01-27 19:46:25,046 | INFO | extracted standard memory fields from prmon json
[2026-01-27 14:46:57] 2026-01-27 19:46:25,046 | WARNING | GPU info not found in prmon json: 'gpu'
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | WARNING | format EVNTtoHITS has no such key: dbData
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | WARNING | format EVNTtoHITS has no such key: dbTime
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | fitting pss+swap vs Time
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | sum of square deviations: 38744912.5
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | sum of deviations: -1634284305.9999998
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | mean x: 1769541527.5
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | mean y: 2308515.36
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | intersect: 74642661547.736
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | chi2: 0.013429206678165374
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | current memory leak: -42.18 B/s (using 50 data points, chi2=0.01)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | could have reported an average CPU frequency of 4771 MHz (5 samples)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | ..............................
[2026-01-27 14:46:57] 2026-01-27 19:46:25,047 | INFO | . Timing measurements:
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . get job = 0 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . initial setup = 0 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . payload setup = 1 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . stage-in = 0 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . payload execution = 3447 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . stage-out = 2 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | . log creation = 0 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,048 | INFO | ..............................
[2026-01-27 14:46:57] 2026-01-27 19:46:25,068 | INFO |
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | job summary report
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | --------------------------------------------------
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | PanDA job id: 6982529453
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | task id: 48310826
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | errors: (none)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | status: LOG_TRANSFER = DONE
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | pilot state: finished
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | transexitcode: 0
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | exeerrorcode: 0
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | exeerrordiag:
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | exitcode: 0
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | exitmsg: OK
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | cpuconsumptiontime: 9930 s
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | nevents: 400
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | neventsw: 0
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | pid: 1360444
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | pgrp: 1360444
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | corecount: 3
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | event service: False
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | sizes: {0: 2284248, 10: 2284248, 3452: 2309902, 3453: 2309901, 3456: 2318969, 3459: 2319025, 3461: 2319379}
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | --------------------------------------------------
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO |
[2026-01-27 14:46:57] 2026-01-27 19:46:25,069 | INFO | executing command: ls -lF /var/lib/boinc-client/slots/0
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue jobs had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue payloads had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue data_in had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue data_out had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue current_data_in had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue validated_jobs had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue validated_payloads had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue monitored_payloads had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue finished_jobs had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue finished_payloads had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue finished_data_in had 1 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue finished_data_out had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue failed_jobs had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue failed_payloads had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue failed_data_in had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue failed_data_out had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue completed_jobs had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue completed_jobids has 1 job(s)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,081 | INFO | queue realtimelog_payloads had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,082 | INFO | queue messages had 0 job(s) [purged]
[2026-01-27 14:46:57] 2026-01-27 19:46:25,082 | INFO | job 6982529453 has completed (purged errors)
[2026-01-27 14:46:57] 2026-01-27 19:46:25,082 | INFO | overall cleanup function is called
[2026-01-27 14:46:57] 2026-01-27 19:46:26,087 | INFO | --- collectZombieJob: --- 10, [1360444]
[2026-01-27 14:46:57] 2026-01-27 19:46:26,087 | INFO | zombie collector waiting for pid 1360444
[2026-01-27 14:46:57] 2026-01-27 19:46:26,087 | INFO | harmless exception when collecting zombies: [Errno 10] No child processes
[2026-01-27 14:46:57] 2026-01-27 19:46:26,087 | INFO | collected zombie processes
[2026-01-27 14:46:57] 2026-01-27 19:46:26,087 | INFO | will attempt to kill all subprocesses of pid=1360444
[2026-01-27 14:46:57] 2026-01-27 19:46:26,169 | INFO | process IDs to be killed: [1360444] (in reverse order)
[2026-01-27 14:46:57] 2026-01-27 19:46:26,210 | WARNING | found no corresponding commands to process id(s)
[2026-01-27 14:46:57] 2026-01-27 19:46:26,210 | INFO | Do not look for orphan processes in BOINC jobs
[2026-01-27 14:46:57] 2026-01-27 19:46:26,212 | INFO | did not find any defunct processes belonging to 1360444
[2026-01-27 14:46:57] 2026-01-27 19:46:26,214 | INFO | did not find any defunct processes belonging to 1360444
[2026-01-27 14:46:57] 2026-01-27 19:46:26,215 | INFO | ready for new job
[2026-01-27 14:46:57] 2026-01-27 19:46:26,215 | INFO | pilot has finished with previous job - re-establishing logging
[2026-01-27 14:46:57] 2026-01-27 19:46:26,215 | INFO | **************************************
[2026-01-27 14:46:57] 2026-01-27 19:46:26,215 | INFO | *** PanDA Pilot version 3.11.3.9 ***
[2026-01-27 14:46:57] 2026-01-27 19:46:26,215 | INFO | **************************************
[2026-01-27 14:46:57] 2026-01-27 19:46:26,216 | INFO |
[2026-01-27 14:46:57] 2026-01-27 19:46:26,216 | INFO | architecture information:
[2026-01-27 14:46:57] 2026-01-27 19:46:26,216 | INFO | executing command: cat /etc/os-release
[2026-01-27 14:46:57] 2026-01-27 19:46:26,225 | INFO | cat /etc/os-release:
[2026-01-27 14:46:57] NAME="CentOS Linux"
[2026-01-27 14:46:57] VERSION="7 (Core)"
[2026-01-27 14:46:57] ID="centos"
[2026-01-27 14:46:57] ID_LIKE="rhel fedora"
[2026-01-27 14:46:57] VERSION_ID="7"
[2026-01-27 14:46:57] PRETTY_NAME="CentOS Linux 7 (Core)"
[2026-01-27 14:46:57] ANSI_COLOR="0;31"
[2026-01-27 14:46:57] CPE_NAME="cpe:/o:centos:centos:7"
[2026-01-27 14:46:57] HOME_URL="https://www.centos.org/"
[2026-01-27 14:46:57] BUG_REPORT_URL="https://bugs.centos.org/"
[2026-01-27 14:46:57]
[2026-01-27 14:46:57] CENTOS_MANTISBT_PROJECT="CentOS-7"
[2026-01-27 14:46:57] CENTOS_MANTISBT_PROJECT_VERSION="7"
[2026-01-27 14:46:57] REDHAT_SUPPORT_PRODUCT="centos"
[2026-01-27 14:46:57] REDHAT_SUPPORT_PRODUCT_VERSION="7"
[2026-01-27 14:46:57]
[2026-01-27 14:46:57] 2026-01-27 19:46:26,225 | INFO | **************************************
[2026-01-27 14:46:57] 2026-01-27 19:46:26,727 | INFO | executing command: df -mP /var/lib/boinc-client/slots/0
[2026-01-27 14:46:57] 2026-01-27 19:46:26,735 | INFO | sufficient remaining disk space (419482828800 B)
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | WARNING | since timefloor is set to 0, pilot was only allowed to run one job
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | INFO | current server update state: UPDATING_FINAL
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | INFO | update_server=False
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | WARNING | setting graceful_stop since proceed_with_getjob() returned False (pilot will end)
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | WARNING | job:job_monitor:received graceful stop - abort after this iteration
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | WARNING | job:queue_monitor:received graceful stop - abort after this iteration
[2026-01-27 14:46:57] 2026-01-27 19:46:26,736 | INFO | aborting loop
[2026-01-27 14:46:57] 2026-01-27 19:46:27,030 | INFO | all data control threads have been joined
[2026-01-27 14:46:57] 2026-01-27 19:46:27,235 | INFO | all payload control threads have been joined
[2026-01-27 14:46:57] 2026-01-27 19:46:27,740 | INFO | [job] retrieve thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:27,740 | INFO | [job] job monitor thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:27,741 | INFO | [job] queue monitor thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:27,878 | WARNING | data:copytool_out:received graceful stop - abort after this iteration
[2026-01-27 14:46:57] 2026-01-27 19:46:27,971 | INFO | [payload] validate_pre thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,035 | INFO | [data] control thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,046 | INFO | [data] copytool_in thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,120 | INFO | [job] create_data_payload thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,122 | INFO | [job] validate thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,240 | INFO | [payload] control thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,249 | INFO | [payload] run_realtimelog thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,277 | INFO | [payload] failed_post thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:28,342 | INFO | all job control threads have been joined
[2026-01-27 14:46:57] 2026-01-27 19:46:28,968 | INFO | [payload] validate_post thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:29,094 | WARNING | data:queue_monitoring:received graceful stop - abort after this iteration
[2026-01-27 14:46:57] 2026-01-27 19:46:29,215 | INFO | [payload] execute_payloads thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:29,347 | INFO | [job] control thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:29,883 | INFO | [data] copytool_out thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:33,099 | INFO | [data] queue_monitor thread has finished
[2026-01-27 14:46:57] 2026-01-27 19:46:38,686 | INFO | [monitor] cgroup control has ended
[2026-01-27 14:46:57] 2026-01-27 19:46:38,800 | INFO | only monitor.control thread still running - safe to abort: ['<_MainThread(MainThread, started 137447252420416)>', '<ExcThread(monitor, started 137446914246400)>']
[2026-01-27 14:46:57] 2026-01-27 19:46:43,825 | INFO | all workflow threads have been joined
[2026-01-27 14:46:57] 2026-01-27 19:46:43,825 | INFO | end of generic workflow (traces error code: 0)
[2026-01-27 14:46:57] 2026-01-27 19:46:43,826 | INFO | traces error code: 0
[2026-01-27 14:46:57] 2026-01-27 19:46:43,826 | INFO | pilot has finished (exit code=0, shell exit code=0)
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | INFO | PID=1353211 has CPU usage=1.6% CMD=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/python/3.9.20-x86_64-centos7/bin/python3 pilot3/pilot.py -q BOINC_MCORE -i P
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | INFO | .. there are 4 such processes running
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | INFO | found 0 job(s) in 20 queues
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | WARNING | pilot monitor received instruction that args.graceful_stop has been set
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | WARNING | will wait for a maximum of 300 s for threads to finish
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | WARNING | job_aborted has been set - aborting pilot monitoring
[2026-01-27 14:46:57] 2026-01-27 19:46:57,671 | INFO | [monitor] control thread has ended
[2026-01-27 14:46:57] 2026-01-27 19:46:57,736 [wrapper] ==== pilot stdout END ====
[2026-01-27 14:46:57] 2026-01-27 19:46:57,737 [wrapper] ==== wrapper stdout RESUME ====
[2026-01-27 14:46:57] 2026-01-27 19:46:57,739 [wrapper] pilotpid: 1353211
[2026-01-27 14:46:57] 2026-01-27 19:46:57,740 [wrapper] Pilot exit status: 0
[2026-01-27 14:46:57] 2026-01-27 19:46:57,745 [wrapper] pandaids: 6982529453 6982529453 6982529453 6982529453 6982529453 6982529453 6982529453 6982529453
[2026-01-27 14:46:57] 2026-01-27 19:46:57,759 [wrapper] cleanup supervisor_pilot 1394706 1353213
[2026-01-27 14:46:57] 2026-01-27 19:46:57,760 [wrapper] Test setup, not cleaning
[2026-01-27 14:46:57] 2026-01-27 19:46:57,761 [wrapper] apfmon messages muted
[2026-01-27 14:46:57] 2026-01-27 19:46:57,762 [wrapper] ==== wrapper stdout END ====
[2026-01-27 14:46:57] 2026-01-27 19:46:57,764 [wrapper] ==== wrapper stderr END ====
[2026-01-27 14:46:57] *** Error codes and diagnostics ***
[2026-01-27 14:46:57] "exeErrorCode": 0,
[2026-01-27 14:46:57] "exeErrorDiag": "",
[2026-01-27 14:46:57] "pilotErrorCode": 0,
[2026-01-27 14:46:57] "pilotErrorDiag": "",
[2026-01-27 14:46:57] *** Listing of results directory ***
[2026-01-27 14:46:57] total 1146228
[2026-01-27 14:46:57] drwxr-xr-x 5 320 320 4096 Jan 14 05:00 pilot3
[2026-01-27 14:46:57] -rwx------ 1 4871 1028 36322 Jan 25 00:14 runpilot2-wrapper.sh
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 5111 Jan 25 00:15 queuedata.json
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 584422 Jan 25 00:15 pilot3.tar.gz
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 100 Jan 27 10:55 wrapper_26015_x86_64-pc-linux-gnu
[2026-01-27 14:46:57] -rwxr-xr-x 1 root root 7986 Jan 27 10:55 run_atlas
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 105 Jan 27 10:55 job.xml
[2026-01-27 14:46:57] -rw-r--r-- 3 root root 494544272 Jan 27 10:55 EVNT.48310824._000101.pool.root.1
[2026-01-27 14:46:57] -rw-r--r-- 3 root root 494544272 Jan 27 10:55 ATLAS.root_0
[2026-01-27 14:46:57] drwxrwx--x 2 root root 4096 Jan 27 10:55 shared
[2026-01-27 14:46:57] -rw-r--r-- 2 root root 597754 Jan 27 10:55 input.tar.gz
[2026-01-27 14:46:57] -rw-r--r-- 2 root root 15845 Jan 27 10:55 start_atlas.sh
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 0 Jan 27 10:55 boinc_lockfile
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 9378 Jan 27 13:48 init_data.xml
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 2523 Jan 27 13:48 pandaJob.out
[2026-01-27 14:46:57] -rw------- 1 root root 1005244 Jan 27 13:48 agis_schedconf.cvmfs.json
[2026-01-27 14:46:57] -rw------- 1 root root 414 Jan 27 13:48 workernode_map.json
[2026-01-27 14:46:57] -rw------- 1 root root 179137970 Jan 27 14:45 HITS.48310826._009107.pool.root.1
[2026-01-27 14:46:57] -rw------- 1 root root 97 Jan 27 14:45 pilot_heartbeat.json
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 532 Jan 27 14:46 boinc_task_state.xml
[2026-01-27 14:46:57] -rw------- 1 root root 1019 Jan 27 14:46 memory_monitor_summary.json
[2026-01-27 14:46:57] -rw------- 1 root root 1516246 Jan 27 14:46 agis_ddmendpoints.agis.ALL.json
[2026-01-27 14:46:57] -rw------- 1 root root 233034 Jan 27 14:46 log.48310826._009107.job.log.tgz.1
[2026-01-27 14:46:57] -rw------- 1 root root 6260 Jan 27 14:46 heartbeat.json
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 8192 Jan 27 14:46 boinc_mmap_file
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 28 Jan 27 14:46 wrapper_checkpoint.txt
[2026-01-27 14:46:57] -rw------- 1 root root 822 Jan 27 14:46 pilotlog.txt
[2026-01-27 14:46:57] -rw------- 1 root root 536982 Jan 27 14:46 log.48310826._009107.job.log.1
[2026-01-27 14:46:57] -rw------- 1 root root 357 Jan 27 14:46 output.list
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 620 Jan 27 14:46 runtime_log
[2026-01-27 14:46:57] -rw------- 1 root root 788480 Jan 27 14:46 result.tar.gz
[2026-01-27 14:46:57] -rw------- 1 root root 1514 Jan 27 14:46 xzcMDmEd738n9Rq4apOajLDm4fhM0noT9bVo0NGKDmb7zKDmPAaIXm.diag
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 8955 Jan 27 14:46 runtime_log.err
[2026-01-27 14:46:57] -rw-r--r-- 1 root root 38007 Jan 27 14:46 stderr.txt
[2026-01-27 14:46:57] HITS file was successfully produced:
[2026-01-27 14:46:57] -rw------- 1 root root 179137970 Jan 27 14:45 shared/HITS.pool.root.1
[2026-01-27 14:46:57] *** Contents of shared directory: ***
[2026-01-27 14:46:57] total 659276
[2026-01-27 14:46:57] -rw-r--r-- 3 root root 494544272 Jan 27 10:55 ATLAS.root_0
[2026-01-27 14:46:57] -rw-r--r-- 2 root root 597754 Jan 27 10:55 input.tar.gz
[2026-01-27 14:46:57] -rw-r--r-- 2 root root 15845 Jan 27 10:55 start_atlas.sh
[2026-01-27 14:46:57] -rw------- 1 root root 179137970 Jan 27 14:45 HITS.pool.root.1
[2026-01-27 14:46:57] -rw------- 1 root root 788480 Jan 27 14:46 result.tar.gz
14:46:59 (1341696): run_atlas exited; CPU time 9945.041947
14:46:59 (1341696): called boinc_finish(0)
</stderr_txt>
]]>