| Name | r47KDmFiV98n9Rq4apOajLDm4fhM0noT9bVooX4SDmuIeKDmHNJMrn_1 |
| Workunit | 239034645 |
| Created | 9 Feb 2026, 5:59:42 UTC |
| Sent | 9 Feb 2026, 6:36:34 UTC |
| Report deadline | 17 Feb 2026, 6:36:34 UTC |
| Received | 9 Feb 2026, 21:12:30 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 10800917 |
| Run time | 1 hour 34 min 8 sec |
| CPU time | 5 hours 54 min 22 sec |
| Priority | 28 |
| Validate state | Valid |
| Credit | 523.05 |
| Device peak FLOPS | 4.00 GFLOPS |
| Application version | ATLAS Simulation v3.01 (native_mt) x86_64-pc-linux-gnu |
| Peak working set size | 2.53 GB |
| Peak swap size | 2.87 GB |
| Peak disk usage | 1.54 GB |
<core_client_version>8.2.2</core_client_version>
<![CDATA[
<stderr_txt>
19:37:04 (4031676): wrapper (7.7.26015): starting
19:37:04 (4031676): wrapper: running run_atlas (--nthreads 4)
[2026-02-09 19:37:04] Arguments: --nthreads 4
[2026-02-09 19:37:04] Threads: 4
[2026-02-09 19:37:04] Checking for CVMFS
[2026-02-09 19:37:04] Probing /cvmfs/atlas.cern.ch... OK
[2026-02-09 19:37:05] Probing /cvmfs/atlas-condb.cern.ch... OK
[2026-02-09 19:37:05] Running cvmfs_config stat atlas.cern.ch
[2026-02-09 19:37:05] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2026-02-09 19:37:05] 2.9.2.0 3088581 4927 91228 156061 2 50 36596658 36864001 0 130560 0 2705289 99.987 177673 14097 http://cvmfs-stratum-one.cern.ch:8000/cvmfs/atlas.cern.ch http://130.183.36.13:3128 1
[2026-02-09 19:37:05] CVMFS is ok
[2026-02-09 19:37:05] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2026-02-09 19:37:05] The CVMFS client on this computer should be configured to use Cloudflare's openhtc.io.
[2026-02-09 19:37:05] Further information can be found at the LHC@home message board.
[2026-02-09 19:37:05] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2026-02-09 19:37:05] Checking for apptainer binary...
[2026-02-09 19:37:05] Using apptainer found in PATH at /usr/bin/apptainer
[2026-02-09 19:37:05] Running /usr/bin/apptainer --version
[2026-02-09 19:37:05] apptainer version 1.4.1-1.1
[2026-02-09 19:37:05] Checking apptainer works with /usr/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2026-02-09 19:37:05] thA315b
[2026-02-09 19:37:05] apptainer works
[2026-02-09 19:37:05] Set ATHENA_PROC_NUMBER=4
[2026-02-09 19:37:05] Set ATHENA_CORE_NUMBER=4
[2026-02-09 19:37:05] Starting ATLAS job with PandaID=7008496032
[2026-02-09 19:37:05] Running command: /usr/bin/apptainer exec -B /cvmfs,/local/data/boinc/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
[2026-02-09 21:11:29] *** The last 200 lines of the pilot log: ***
[2026-02-09 21:11:29] 2026-02-09 20:10:38,604 | INFO |
[2026-02-09 21:11:29] 2026-02-09 20:10:38,605 | INFO | architecture information:
[2026-02-09 21:11:29] 2026-02-09 20:10:38,605 | INFO | executing command: cat /etc/os-release
[2026-02-09 21:11:29] 2026-02-09 20:10:38,615 | INFO | cat /etc/os-release:
[2026-02-09 21:11:29] NAME="CentOS Linux"
[2026-02-09 21:11:29] VERSION="7 (Core)"
[2026-02-09 21:11:29] ID="centos"
[2026-02-09 21:11:29] ID_LIKE="rhel fedora"
[2026-02-09 21:11:29] VERSION_ID="7"
[2026-02-09 21:11:29] PRETTY_NAME="CentOS Linux 7 (Core)"
[2026-02-09 21:11:29] ANSI_COLOR="0;31"
[2026-02-09 21:11:29] CPE_NAME="cpe:/o:centos:centos:7"
[2026-02-09 21:11:29] HOME_URL="https://www.centos.org/"
[2026-02-09 21:11:29] BUG_REPORT_URL="https://bugs.centos.org/"
[2026-02-09 21:11:29]
[2026-02-09 21:11:29] CENTOS_MANTISBT_PROJECT="CentOS-7"
[2026-02-09 21:11:29] CENTOS_MANTISBT_PROJECT_VERSION="7"
[2026-02-09 21:11:29] REDHAT_SUPPORT_PRODUCT="centos"
[2026-02-09 21:11:29] REDHAT_SUPPORT_PRODUCT_VERSION="7"
[2026-02-09 21:11:29]
[2026-02-09 21:11:29] 2026-02-09 20:10:38,615 | INFO | **************************************
[2026-02-09 21:11:29] 2026-02-09 20:10:39,118 | INFO | executing command: df -mP /local/data/boinc/slots/0
[2026-02-09 21:11:29] 2026-02-09 20:10:39,132 | INFO | sufficient remaining disk space (317886300160 B)
[2026-02-09 21:11:29] 2026-02-09 20:10:39,132 | WARNING | since timefloor is set to 0, pilot was only allowed to run one job
[2026-02-09 21:11:29] 2026-02-09 20:10:39,132 | INFO | current server update state: UPDATING_FINAL
[2026-02-09 21:11:29] 2026-02-09 20:10:39,132 | INFO | update_server=False
[2026-02-09 21:11:29] 2026-02-09 20:10:39,132 | WARNING | setting graceful_stop since proceed_with_getjob() returned False (pilot will end)
[2026-02-09 21:11:29] 2026-02-09 20:10:39,133 | WARNING | data:copytool_out:received graceful stop - abort after this iteration
[2026-02-09 21:11:29] 2026-02-09 20:10:39,133 | WARNING | aborting monitor loop since graceful_stop has been set (timing out remaining threads)
[2026-02-09 21:11:29] 2026-02-09 20:10:39,133 | INFO | found 0 job(s) in 20 queues
[2026-02-09 21:11:29] 2026-02-09 20:10:39,133 | WARNING | pilot monitor received instruction that args.graceful_stop has been set
[2026-02-09 21:11:29] 2026-02-09 20:10:39,133 | WARNING | will wait for a maximum of 300 s for threads to finish
[2026-02-09 21:11:29] 2026-02-09 20:10:39,133 | WARNING | job:queue_monitor:received graceful stop - abort after this iteration
[2026-02-09 21:11:29] 2026-02-09 20:10:39,368 | INFO | all job control threads have been joined
[2026-02-09 21:11:29] 2026-02-09 20:10:39,569 | WARNING | job monitor detected an abort_job request (signal=args.signal)
[2026-02-09 21:11:29] 2026-02-09 20:10:39,569 | WARNING | cannot recover job monitoring - aborting pilot
[2026-02-09 21:11:29] 2026-02-09 20:10:39,569 | WARNING | job:job_monitor:received graceful stop - abort after this iteration
[2026-02-09 21:11:29] 2026-02-09 20:10:39,569 | INFO | will abort loop
[2026-02-09 21:11:29] 2026-02-09 20:10:39,575 | INFO | all data control threads have been joined
[2026-02-09 21:11:29] 2026-02-09 20:10:40,138 | INFO | [job] retrieve thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,138 | INFO | [job] queue monitor thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,242 | INFO | all payload control threads have been joined
[2026-02-09 21:11:29] 2026-02-09 20:10:40,260 | INFO | [data] copytool_in thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,307 | INFO | [payload] execute_payloads thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,374 | INFO | [job] control thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,575 | INFO | [job] job monitor thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,581 | INFO | [data] control thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,618 | INFO | [payload] run_realtimelog thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:40,785 | INFO | [payload] failed_post thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,139 | INFO | [data] copytool_out thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,247 | INFO | [payload] control thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,317 | INFO | [payload] validate_post thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,524 | INFO | [job] validate thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,582 | INFO | [job] create_data_payload thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,622 | INFO | [payload] validate_pre thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:10:41,656 | WARNING | data:queue_monitoring:received graceful stop - abort after this iteration
[2026-02-09 21:11:29] 2026-02-09 20:10:45,663 | INFO | [data] queue_monitor thread has finished
[2026-02-09 21:11:29] 2026-02-09 20:11:23,393 | INFO | [monitor] cgroup control has ended
[2026-02-09 21:11:29] 2026-02-09 20:11:24,399 | INFO | only monitor.control thread still running - safe to abort: ['<_MainThread(MainThread, started 139944836364096)>', '<ExcThread(monitor, started 139943942788864)>']
[2026-02-09 21:11:29] 2026-02-09 20:11:25,379 | WARNING | job_aborted has been set - aborting pilot monitoring
[2026-02-09 21:11:29] 2026-02-09 20:11:25,379 | INFO | [monitor] control thread has ended
[2026-02-09 21:11:29] 2026-02-09 20:11:29,421 | INFO | all workflow threads have been joined
[2026-02-09 21:11:29] 2026-02-09 20:11:29,421 | INFO | end of generic workflow (traces error code: 0)
[2026-02-09 21:11:29] 2026-02-09 20:11:29,421 | INFO | traces error code: 0
[2026-02-09 21:11:29] 2026-02-09 20:11:29,422 | INFO | pilot has finished (exit code=0, shell exit code=0)
[2026-02-09 21:11:29] 2026-02-09 20:11:29,476 [wrapper] ==== pilot stdout END ====
[2026-02-09 21:11:29] 2026-02-09 20:11:29,479 [wrapper] ==== wrapper stdout RESUME ====
[2026-02-09 21:11:29] 2026-02-09 20:11:29,481 [wrapper] pilotpid: 4035804
[2026-02-09 21:11:29] 2026-02-09 20:11:29,484 [wrapper] Pilot exit status: 0
[2026-02-09 21:11:29] 2026-02-09 20:11:29,491 [wrapper] pandaids: 7008496032
[2026-02-09 21:11:29] 2026-02-09 20:11:29,511 [wrapper] cleanup supervisor_pilot 4045351 4035805
[2026-02-09 21:11:29] 2026-02-09 20:11:29,514 [wrapper] Test setup, not cleaning
[2026-02-09 21:11:29] 2026-02-09 20:11:29,516 [wrapper] apfmon messages muted
[2026-02-09 21:11:29] 2026-02-09 20:11:29,519 [wrapper] ==== wrapper stdout END ====
[2026-02-09 21:11:29] 2026-02-09 20:11:29,521 [wrapper] ==== wrapper stderr END ====
[2026-02-09 21:11:29] *** Error codes and diagnostics ***
[2026-02-09 21:11:29] "exeErrorCode": 0,
[2026-02-09 21:11:29] "exeErrorDiag": "",
[2026-02-09 21:11:29] "pilotErrorCode": 0,
[2026-02-09 21:11:29] "pilotErrorDiag": "",
[2026-02-09 21:11:29] *** Listing of results directory ***
[2026-02-09 21:11:29] total 952016
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 602657 Feb 3 12:11 pilot3.tar.gz
[2026-02-09 21:11:29] -rwx------ 1 boinc boinc 36322 Feb 9 02:57 runpilot2-wrapper.sh
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 5111 Feb 9 02:59 queuedata.json
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 100 Feb 9 19:37 wrapper_26015_x86_64-pc-linux-gnu
[2026-02-09 21:11:29] -rwxr-xr-x 1 boinc boinc 7986 Feb 9 19:37 run_atlas
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 105 Feb 9 19:37 job.xml
[2026-02-09 21:11:29] -rw-r--r-- 2 boinc boinc 679626330 Feb 9 19:37 EVNT.48431094._000606.pool.root.1
[2026-02-09 21:11:29] drwxrwx--x 2 boinc boinc 4096 Feb 9 19:37 shared
[2026-02-09 21:11:29] -rw-r--r-- 2 boinc boinc 616315 Feb 9 19:37 input.tar.gz
[2026-02-09 21:11:29] -rw-r--r-- 2 boinc boinc 15845 Feb 9 19:37 start_atlas.sh
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 0 Feb 9 19:37 boinc_setup_complete
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 6443 Feb 9 19:37 init_data.xml
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 0 Feb 9 19:37 boinc_lockfile
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 2556 Feb 9 19:37 pandaJob.out
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 1009496 Feb 9 19:37 agis_schedconf.cvmfs.json
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 1513144 Feb 9 19:37 agis_ddmendpoints.agis.ALL.json
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 424 Feb 9 19:37 workernode_map.json
[2026-02-09 21:11:29] drwx------ 7 boinc boinc 4096 Feb 9 19:37 pilot3
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 95 Feb 9 21:09 pilot_heartbeat.json
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 289003809 Feb 9 21:10 HITS.48431096._013509.pool.root.1
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 530 Feb 9 21:10 boinc_task_state.xml
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 1019 Feb 9 21:10 memory_monitor_summary.json
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 312569 Feb 9 21:10 log.48431096._013509.job.log.tgz.1
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 6306 Feb 9 21:10 heartbeat.json
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 27 Feb 9 21:11 wrapper_checkpoint.txt
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 8192 Feb 9 21:11 boinc_mmap_file
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 4738 Feb 9 21:11 pilotlog.txt
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 819836 Feb 9 21:11 log.48431096._013509.job.log.1
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 357 Feb 9 21:11 output.list
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 620 Feb 9 21:11 runtime_log
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 1146880 Feb 9 21:11 result.tar.gz
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 8901 Feb 9 21:11 runtime_log.err
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 658 Feb 9 21:11 r47KDmFiV98n9Rq4apOajLDm4fhM0noT9bVooX4SDmuIeKDmHNJMrn.diag
[2026-02-09 21:11:29] -rw-r--r-- 1 boinc boinc 9183 Feb 9 21:11 stderr.txt
[2026-02-09 21:11:29] HITS file was successfully produced:
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 289003809 Feb 9 21:10 shared/HITS.pool.root.1
[2026-02-09 21:11:29] *** Contents of shared directory: ***
[2026-02-09 21:11:29] total 947680
[2026-02-09 21:11:29] -rw-r--r-- 2 boinc boinc 679626330 Feb 9 19:37 ATLAS.root_0
[2026-02-09 21:11:29] -rw-r--r-- 2 boinc boinc 616315 Feb 9 19:37 input.tar.gz
[2026-02-09 21:11:29] -rw-r--r-- 2 boinc boinc 15845 Feb 9 19:37 start_atlas.sh
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 289003809 Feb 9 21:10 HITS.pool.root.1
[2026-02-09 21:11:29] -rw------- 1 boinc boinc 1146880 Feb 9 21:11 result.tar.gz
21:11:31 (4031676): run_atlas exited; CPU time 21243.703393
21:11:31 (4031676): called boinc_finish(0)
</stderr_txt>
]]>
©2026 CERN