| Field | Value |
| --- | --- |
| Name | CC3NDmqRIg8n9Rq4apOajLDm4fhM0noT9bVo0NGKDmBaGKDmq10Bwm_2 |
| Workunit | 237686621 |
| Created | 28 Nov 2025, 19:54:46 UTC |
| Sent | 29 Nov 2025, 18:41:05 UTC |
| Report deadline | 7 Dec 2025, 18:41:05 UTC |
| Received | 2 Dec 2025, 20:46:02 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 10695559 |
| Run time | 6 hours 53 min 28 sec |
| CPU time | 1 day 3 hours 2 min 13 sec |
| Validate state | Valid |
| Credit | 2,297.05 |
| Device peak FLOPS | 4.00 GFLOPS |
| Application version | ATLAS Simulation v3.01 (native_mt) x86_64-pc-linux-gnu |
| Peak working set size | 1.62 GB |
| Peak swap size | 2.16 GB |
| Peak disk usage | 1.63 GB |
<core_client_version>8.2.2</core_client_version>
<![CDATA[
<stderr_txt>
13:49:11 (1793977): wrapper (7.7.26015): starting
13:49:11 (1793977): wrapper: running run_atlas (--nthreads 4)
[2025-12-02 13:49:11] Arguments: --nthreads 4
[2025-12-02 13:49:11] Threads: 4
[2025-12-02 13:49:11] Checking for CVMFS
[2025-12-02 13:49:11] Probing /cvmfs/atlas.cern.ch... OK
[2025-12-02 13:49:11] Probing /cvmfs/atlas-condb.cern.ch... OK
[2025-12-02 13:49:11] Running cvmfs_config stat atlas.cern.ch
[2025-12-02 13:49:11] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2025-12-02 13:49:11] 2.9.2.0 3127036 19998 101220 153550 2 81 36102008 36864001 0 130560 0 7976037 99.958 891448 1373 http://cvmfs-stratum-one.cern.ch:8000/cvmfs/atlas.cern.ch http://130.183.36.13:3128 1
[2025-12-02 13:49:11] CVMFS is ok
[2025-12-02 13:49:11] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2025-12-02 13:49:11] The CVMFS client on this computer should be configured to use Cloudflare's openhtc.io.
[2025-12-02 13:49:11] Further information can be found at the LHC@home message board.
[2025-12-02 13:49:11] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2025-12-02 13:49:11] Checking for apptainer binary...
[2025-12-02 13:49:11] Using apptainer found in PATH at /usr/bin/apptainer
[2025-12-02 13:49:11] Running /usr/bin/apptainer --version
[2025-12-02 13:49:11] apptainer version 1.4.1-1.1
[2025-12-02 13:49:11] Checking apptainer works with /usr/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2025-12-02 13:49:11] thA334
[2025-12-02 13:49:11] apptainer works
[2025-12-02 13:49:11] Set ATHENA_PROC_NUMBER=4
[2025-12-02 13:49:11] Set ATHENA_CORE_NUMBER=4
[2025-12-02 13:49:11] Starting ATLAS job with PandaID=6893852283
[2025-12-02 13:49:11] Running command: /usr/bin/apptainer exec -B /cvmfs,/local/data/boinc/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
[2025-12-02 20:43:57] *** The last 200 lines of the pilot log: ***
[2025-12-02 20:43:57] 2025-12-02 19:43:28,063 | INFO | queue realtimelog_payloads had 0 job(s) [purged]
[2025-12-02 20:43:57] 2025-12-02 19:43:28,063 | INFO | queue messages had 0 job(s) [purged]
[2025-12-02 20:43:57] 2025-12-02 19:43:28,063 | INFO | job 6893852283 has completed (purged errors)
[2025-12-02 20:43:57] 2025-12-02 19:43:28,064 | INFO | overall cleanup function is called
[2025-12-02 20:43:57] 2025-12-02 19:43:29,073 | INFO | --- collectZombieJob: --- 10, [1804330]
[2025-12-02 20:43:57] 2025-12-02 19:43:29,073 | INFO | zombie collector waiting for pid 1804330
[2025-12-02 20:43:57] 2025-12-02 19:43:29,073 | INFO | harmless exception when collecting zombies: [Errno 10] No child processes
[2025-12-02 20:43:57] 2025-12-02 19:43:29,073 | INFO | collected zombie processes
[2025-12-02 20:43:57] 2025-12-02 19:43:29,073 | INFO | will attempt to kill all subprocesses of pid=1804330
[2025-12-02 20:43:57] 2025-12-02 19:43:29,118 | INFO | process IDs to be killed: [1804330] (in reverse order)
[2025-12-02 20:43:57] 2025-12-02 19:43:29,150 | WARNING | found no corresponding commands to process id(s)
[2025-12-02 20:43:57] 2025-12-02 19:43:29,150 | INFO | Do not look for orphan processes in BOINC jobs
[2025-12-02 20:43:57] 2025-12-02 19:43:29,152 | INFO | did not find any defunct processes belonging to 1804330
[2025-12-02 20:43:57] 2025-12-02 19:43:29,153 | INFO | did not find any defunct processes belonging to 1804330
[2025-12-02 20:43:57] 2025-12-02 19:43:29,154 | WARNING | condor_chirp not found
[2025-12-02 20:43:57] 2025-12-02 19:43:29,154 | INFO | ready for new job
[2025-12-02 20:43:57] 2025-12-02 19:43:29,154 | INFO | pilot has finished with previous job - re-establishing logging
[2025-12-02 20:43:57] 2025-12-02 19:43:29,155 | INFO | ***************************************
[2025-12-02 20:43:57] 2025-12-02 19:43:29,155 | INFO | *** PanDA Pilot version 3.11.1.15 ***
[2025-12-02 20:43:57] 2025-12-02 19:43:29,155 | INFO | ***************************************
[2025-12-02 20:43:57] 2025-12-02 19:43:29,155 | INFO |
[2025-12-02 20:43:57] 2025-12-02 19:43:29,156 | INFO | architecture information:
[2025-12-02 20:43:57] 2025-12-02 19:43:29,156 | INFO | executing command: cat /etc/os-release
[2025-12-02 20:43:57] 2025-12-02 19:43:29,169 | INFO | cat /etc/os-release:
[2025-12-02 20:43:57] NAME="CentOS Linux"
[2025-12-02 20:43:57] VERSION="7 (Core)"
[2025-12-02 20:43:57] ID="centos"
[2025-12-02 20:43:57] ID_LIKE="rhel fedora"
[2025-12-02 20:43:57] VERSION_ID="7"
[2025-12-02 20:43:57] PRETTY_NAME="CentOS Linux 7 (Core)"
[2025-12-02 20:43:57] ANSI_COLOR="0;31"
[2025-12-02 20:43:57] CPE_NAME="cpe:/o:centos:centos:7"
[2025-12-02 20:43:57] HOME_URL="https://www.centos.org/"
[2025-12-02 20:43:57] BUG_REPORT_URL="https://bugs.centos.org/"
[2025-12-02 20:43:57]
[2025-12-02 20:43:57] CENTOS_MANTISBT_PROJECT="CentOS-7"
[2025-12-02 20:43:57] CENTOS_MANTISBT_PROJECT_VERSION="7"
[2025-12-02 20:43:57] REDHAT_SUPPORT_PRODUCT="centos"
[2025-12-02 20:43:57] REDHAT_SUPPORT_PRODUCT_VERSION="7"
[2025-12-02 20:43:57]
[2025-12-02 20:43:57] 2025-12-02 19:43:29,169 | INFO | ***************************************
[2025-12-02 20:43:57] 2025-12-02 19:43:29,672 | INFO | executing command: df -mP /local/data/boinc/slots/0
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | INFO | sufficient remaining disk space (219949301760 B)
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | WARNING | since timefloor is set to 0, pilot was only allowed to run one job
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | INFO | current server update state: UPDATING_FINAL
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | INFO | update_server=False
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | WARNING | setting graceful_stop since proceed_with_getjob() returned False (pilot will end)
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | WARNING | job:queue_monitor:received graceful stop - abort after this iteration
[2025-12-02 20:43:57] 2025-12-02 19:43:29,690 | WARNING | aborting monitor loop since graceful_stop has been set (timing out remaining threads)
[2025-12-02 20:43:57] 2025-12-02 19:43:29,691 | INFO | found 0 job(s) in 20 queues
[2025-12-02 20:43:57] 2025-12-02 19:43:29,691 | WARNING | pilot monitor received instruction that args.graceful_stop has been set
[2025-12-02 20:43:57] 2025-12-02 19:43:29,691 | WARNING | will wait for a maximum of 300 s for threads to finish
[2025-12-02 20:43:57] 2025-12-02 19:43:29,884 | INFO | all job control threads have been joined
[2025-12-02 20:43:57] 2025-12-02 19:43:29,936 | WARNING | job monitor detected an abort_job request (signal=args.signal)
[2025-12-02 20:43:57] 2025-12-02 19:43:29,937 | WARNING | cannot recover job monitoring - aborting pilot
[2025-12-02 20:43:57] 2025-12-02 19:43:29,937 | WARNING | job:job_monitor:received graceful stop - abort after this iteration
[2025-12-02 20:43:57] 2025-12-02 19:43:29,937 | INFO | will abort loop
[2025-12-02 20:43:57] 2025-12-02 19:43:30,211 | WARNING | data:copytool_out:received graceful stop - abort after this iteration
[2025-12-02 20:43:57] 2025-12-02 19:43:30,584 | INFO | all data control threads have been joined
[2025-12-02 20:43:57] 2025-12-02 19:43:30,695 | INFO | [job] retrieve thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:30,696 | INFO | [job] queue monitor thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:30,715 | INFO | [job] create_data_payload thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:30,806 | INFO | [payload] validate_pre thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:30,890 | INFO | [job] control thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:30,943 | INFO | [job] job monitor thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:31,012 | INFO | [data] copytool_in thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:31,123 | INFO | all payload control threads have been joined
[2025-12-02 20:43:57] 2025-12-02 19:43:31,208 | INFO | [payload] validate_post thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:31,224 | WARNING | data:queue_monitoring:received graceful stop - abort after this iteration
[2025-12-02 20:43:57] 2025-12-02 19:43:31,590 | INFO | [data] control thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:31,777 | INFO | [payload] execute_payloads thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:31,816 | INFO | [payload] failed_post thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:32,127 | INFO | [job] validate thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:32,129 | INFO | [payload] control thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:32,215 | INFO | [data] copytool_out thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:35,231 | INFO | [data] queue_monitor thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:40,042 | INFO | job.realtimelogging is not enabled
[2025-12-02 20:43:57] 2025-12-02 19:43:41,048 | INFO | [payload] run_realtimelog thread has finished
[2025-12-02 20:43:57] 2025-12-02 19:43:51,796 | INFO | [monitor] cgroup control has ended
[2025-12-02 20:43:57] 2025-12-02 19:43:51,811 | INFO | only monitor.control thread still running - safe to abort: ['<_MainThread(MainThread, started 140151612081984)>', '<ExcThread(monitor, started 140150713591552)>']
[2025-12-02 20:43:57] 2025-12-02 19:43:52,809 | WARNING | job_aborted has been set - aborting pilot monitoring
[2025-12-02 20:43:57] 2025-12-02 19:43:52,809 | INFO | [monitor] control thread has ended
[2025-12-02 20:43:57] 2025-12-02 19:43:56,837 | INFO | all workflow threads have been joined
[2025-12-02 20:43:57] 2025-12-02 19:43:56,837 | INFO | end of generic workflow (traces error code: 0)
[2025-12-02 20:43:57] 2025-12-02 19:43:56,837 | INFO | traces error code: 0
[2025-12-02 20:43:57] 2025-12-02 19:43:56,837 | INFO | pilot has finished (exit code=0, shell exit code=0)
[2025-12-02 20:43:57] 2025-12-02 19:43:56,904 [wrapper] ==== pilot stdout END ====
[2025-12-02 20:43:57] 2025-12-02 19:43:56,908 [wrapper] ==== wrapper stdout RESUME ====
[2025-12-02 20:43:57] 2025-12-02 19:43:56,912 [wrapper] pilotpid: 1797771
[2025-12-02 20:43:57] 2025-12-02 19:43:56,916 [wrapper] Pilot exit status: 0
[2025-12-02 20:43:57] 2025-12-02 19:43:56,926 [wrapper] pandaids: 6893852283
[2025-12-02 20:43:57] 2025-12-02 19:43:56,947 [wrapper] cleanup supervisor_pilot 1849043 1797772
[2025-12-02 20:43:57] 2025-12-02 19:43:56,950 [wrapper] Test setup, not cleaning
[2025-12-02 20:43:57] 2025-12-02 19:43:56,954 [wrapper] apfmon messages muted
[2025-12-02 20:43:57] 2025-12-02 19:43:56,957 [wrapper] ==== wrapper stdout END ====
[2025-12-02 20:43:57] 2025-12-02 19:43:56,960 [wrapper] ==== wrapper stderr END ====
[2025-12-02 20:43:57] *** Error codes and diagnostics ***
[2025-12-02 20:43:57]     "exeErrorCode": 0,
[2025-12-02 20:43:57]     "exeErrorDiag": "",
[2025-12-02 20:43:57]     "pilotErrorCode": 0,
[2025-12-02 20:43:57]     "pilotErrorDiag": "",
[2025-12-02 20:43:57] *** Listing of results directory ***
[2025-12-02 20:43:57] total 1315104
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc    576529 Nov 20 12:54 pilot3.tar.gz
[2025-12-02 20:43:57] -rwx------ 1 boinc boinc     36292 Nov 20 12:56 runpilot2-wrapper.sh
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc      5112 Nov 20 12:56 queuedata.json
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc       100 Dec  2 13:49 wrapper_26015_x86_64-pc-linux-gnu
[2025-12-02 20:43:57] -rwxr-xr-x 1 boinc boinc      7986 Dec  2 13:49 run_atlas
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc       105 Dec  2 13:49 job.xml
[2025-12-02 20:43:57] -rw-r--r-- 2 boinc boinc 408855506 Dec  2 13:49 EVNT.47493127._000377.pool.root.1
[2025-12-02 20:43:57] -rw-r--r-- 2 boinc boinc    590100 Dec  2 13:49 input.tar.gz
[2025-12-02 20:43:57] -rw-r--r-- 2 boinc boinc     15847 Dec  2 13:49 start_atlas.sh
[2025-12-02 20:43:57] drwxrwx--x 2 boinc boinc      4096 Dec  2 13:49 shared
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc         0 Dec  2 13:49 boinc_setup_complete
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc      6369 Dec  2 13:49 init_data.xml
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc         0 Dec  2 13:49 boinc_lockfile
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc      2786 Dec  2 13:49 pandaJob.out
[2025-12-02 20:43:57] -rw------- 1 boinc boinc    986791 Dec  2 13:49 agis_schedconf.cvmfs.json
[2025-12-02 20:43:57] drwx------ 5 boinc boinc      4096 Dec  2 13:49 pilot3
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc       531 Dec  2 20:39 boinc_task_state.xml
[2025-12-02 20:43:57] -rw------- 1 boinc boinc        95 Dec  2 20:42 pilot_heartbeat.json
[2025-12-02 20:43:57] -rw------- 1 boinc boinc 924381337 Dec  2 20:42 HITS.47523449._003762.pool.root.1
[2025-12-02 20:43:57] -rw------- 1 boinc boinc      1028 Dec  2 20:43 memory_monitor_summary.json
[2025-12-02 20:43:57] -rw------- 1 boinc boinc   1516240 Dec  2 20:43 agis_ddmendpoints.agis.ALL.json
[2025-12-02 20:43:57] -rw------- 1 boinc boinc   1259948 Dec  2 20:43 log.47523449._003762.job.log.tgz.1
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc        28 Dec  2 20:43 wrapper_checkpoint.txt
[2025-12-02 20:43:57] -rw------- 1 boinc boinc      8583 Dec  2 20:43 heartbeat.json
[2025-12-02 20:43:57] -rw------- 1 boinc boinc      4814 Dec  2 20:43 pilotlog.txt
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc      8192 Dec  2 20:43 boinc_mmap_file
[2025-12-02 20:43:57] -rw------- 1 boinc boinc   3505725 Dec  2 20:43 log.47523449._003762.job.log.1
[2025-12-02 20:43:57] -rw------- 1 boinc boinc       353 Dec  2 20:43 output.list
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc       620 Dec  2 20:43 runtime_log
[2025-12-02 20:43:57] -rw------- 1 boinc boinc   4782080 Dec  2 20:43 result.tar.gz
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc      8898 Dec  2 20:43 runtime_log.err
[2025-12-02 20:43:57] -rw------- 1 boinc boinc       665 Dec  2 20:43 CC3NDmqRIg8n9Rq4apOajLDm4fhM0noT9bVo0NGKDmBaGKDmq10Bwm.diag
[2025-12-02 20:43:57] -rw-r--r-- 1 boinc boinc     11487 Dec  2 20:43 stderr.txt
[2025-12-02 20:43:57] HITS file was successfully produced:
[2025-12-02 20:43:57] -rw------- 1 boinc boinc 924381337 Dec  2 20:42 shared/HITS.pool.root.1
[2025-12-02 20:43:57] *** Contents of shared directory: ***
[2025-12-02 20:43:57] total 1307272
[2025-12-02 20:43:57] -rw-r--r-- 2 boinc boinc 408855506 Dec  2 13:49 ATLAS.root_0
[2025-12-02 20:43:57] -rw-r--r-- 2 boinc boinc    590100 Dec  2 13:49 input.tar.gz
[2025-12-02 20:43:57] -rw-r--r-- 2 boinc boinc     15847 Dec  2 13:49 start_atlas.sh
[2025-12-02 20:43:57] -rw------- 1 boinc boinc 924381337 Dec  2 20:42 HITS.pool.root.1
[2025-12-02 20:43:57] -rw------- 1 boinc boinc   4782080 Dec  2 20:43 result.tar.gz
20:43:58 (1793977): run_atlas exited; CPU time 97303.560892
20:43:58 (1793977): called boinc_finish(0)
</stderr_txt>
]]>
©2025 CERN