Name | 6c6MDm6CXg7nsSi4ap6QjLDmwznN0nGgGQJmUXzaDmOjhKDmjY2FSm_1 |
Workunit | 232800425 |
Created | 3 Jun 2025, 5:04:14 UTC |
Sent | 3 Jun 2025, 6:23:01 UTC |
Report deadline | 11 Jun 2025, 6:23:01 UTC |
Received | 3 Jun 2025, 14:25:01 UTC |
Server state | Over |
Outcome | Success |
Client state | Done |
Exit status | 0 (0x00000000) |
Computer ID | 10869376 |
Run time | 1 hour 39 min 21 sec
CPU time | 12 hours 21 min 51 sec |
Validate state | Valid |
Credit | 174.85 |
Device peak FLOPS | 20.57 GFLOPS |
Application version | ATLAS Simulation v3.01 (native_mt) x86_64-pc-linux-gnu |
Peak working set size | 2.66 GB |
Peak swap size | 3.04 GB |
Peak disk usage | 591.85 MB |
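As a quick sanity check on the table above, the reported CPU time can be compared with the run time scaled by the 8 threads the task ran with (`--nthreads 8` in the log below). The sketch below is illustrative only; the `to_seconds` helper and the resulting ~93% figure are derived from the table values, not produced by BOINC itself.

```python
# Rough multi-core efficiency check for this task (illustrative, not BOINC code).
# Values taken from the task table: run time 1 h 39 min 21 s,
# CPU time 12 h 21 min 51 s, 8 threads requested via --nthreads 8.

def to_seconds(hours: int, minutes: int, seconds: int) -> int:
    return hours * 3600 + minutes * 60 + seconds

run_time = to_seconds(1, 39, 21)    # 5961 s wall clock
cpu_time = to_seconds(12, 21, 51)   # 44511 s summed over all cores
threads = 8

efficiency = cpu_time / (run_time * threads)
print(f"CPU efficiency: {efficiency:.1%}")  # roughly 93% with these numbers
```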
<core_client_version>8.0.2</core_client_version>
<![CDATA[
<stderr_txt>
11:41:05 (236983): wrapper (7.7.26015): starting
11:41:05 (236983): wrapper: running run_atlas (--nthreads 8)
[2025-06-03 11:41:05] Arguments: --nthreads 8
[2025-06-03 11:41:05] Threads: 8
[2025-06-03 11:41:05] Checking for CVMFS
[2025-06-03 11:41:05] Probing /cvmfs/atlas.cern.ch... OK
[2025-06-03 11:41:05] Probing /cvmfs/atlas-condb.cern.ch... OK
[2025-06-03 11:41:05] Running cvmfs_config stat atlas.cern.ch
[2025-06-03 11:41:05] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2025-06-03 11:41:05] 2.12.7.0 20002 777 54852 146741 3 56 3220070 4096001 0 130560 0 595342 99.989 228731 8792 http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch DIRECT 1
[2025-06-03 11:41:05] CVMFS is ok
[2025-06-03 11:41:05] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2025-06-03 11:41:05] Small home clusters do not require a local http proxy but it is suggested if
[2025-06-03 11:41:05] more than 10 cores throughout the same LAN segment are regularly running ATLAS like tasks.
[2025-06-03 11:41:05] Further information can be found at the LHC@home message board.
[2025-06-03 11:41:05] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2025-06-03 11:41:05] Checking for apptainer binary...
[2025-06-03 11:41:05] apptainer is not installed, using version from CVMFS
[2025-06-03 11:41:05] Checking apptainer works with /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2025-06-03 11:41:06] vm-zhiwei-ec8ce3
[2025-06-03 11:41:06] apptainer works
[2025-06-03 11:41:06] Set ATHENA_PROC_NUMBER=8
[2025-06-03 11:41:06] Set ATHENA_CORE_NUMBER=8
[2025-06-03 11:41:06] Starting ATLAS job with PandaID=6677454664
[2025-06-03 11:41:06] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/apptainer/x86_64-el7/current/bin/apptainer exec -B /cvmfs,/var/lib/boinc/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
[2025-06-03 13:20:22] *** The last 200 lines of the pilot log: ***
[2025-06-03 13:20:22] protocol_id=None
[2025-06-03 13:20:22] protocols=[{'endpoint': 'davs://dav.ndgf.org:443', 'flavour': 'WEBDAV', 'id': 331, 'path': '/atlas/disk/atlasdatadisk/rucio/'}]
[2025-06-03 13:20:22] replicas=None
[2025-06-03 13:20:22] scope=mc23_13p6TeV
[2025-06-03 13:20:22] status=None
[2025-06-03 13:20:22] status_code=0
[2025-06-03 13:20:22] storage_token=
[2025-06-03 13:20:22] surl=/var/lib/boinc/slots/0/PanDA_Pilot-6677454664/log.45006595._018119.job.log.tgz.1
[2025-06-03 13:20:22] turl=davs://dav.ndgf.org:443/atlas/disk/atlasdatadisk/rucio/mc23_13p6TeV/d2/78/log.45006595._018119.job.log.tgz.1
[2025-06-03 13:20:22] workdir=None
[2025-06-03 13:20:22] ]
[2025-06-03 13:20:22] 2025-06-03 13:19:51,822 | INFO | transferring file log.45006595._018119.job.log.tgz.1 from /var/lib/boinc/slots/0/PanDA_Pilot-6677454664/log.45006595._018119.job.log.tgz.1 to /var/lib/boinc/slots/
[2025-06-03 13:20:22] 2025-06-03 13:19:51,822 | INFO | executing command: /usr/bin/env mv /var/lib/boinc/slots/0/PanDA_Pilot-6677454664/log.45006595._018119.job.log.tgz.1 /var/lib/boinc/slots/0/log.45006595._018119.job
[2025-06-03 13:20:22] 2025-06-03 13:19:51,841 | INFO | adding to output.list: log.45006595._018119.job.log.tgz.1 davs://dav.ndgf.org:443/atlas/disk/atlasdatadisk/rucio/mc23_13p6TeV/d2/78/log.45006595._018119.job.log.tg
[2025-06-03 13:20:22] 2025-06-03 13:19:51,841 | INFO | alt stage-out settings: ['pl', 'write_lan', 'w', 'default'], allow_altstageout=False, remain_files=0, has_altstorage=True
[2025-06-03 13:20:22] 2025-06-03 13:19:51,842 | INFO | summary of transferred files:
[2025-06-03 13:20:22] 2025-06-03 13:19:51,842 | INFO | -- lfn=log.45006595._018119.job.log.tgz.1, status_code=0, status=transferred
[2025-06-03 13:20:22] 2025-06-03 13:19:51,842 | INFO | stage-out finished correctly
[2025-06-03 13:20:22] 2025-06-03 13:19:51,906 | INFO | monitor loop #433: job 0:6677454664 is in state 'finished'
[2025-06-03 13:20:22] 2025-06-03 13:19:51,907 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 13:20:22] 2025-06-03 13:19:53,698 | INFO | finished stage-out for finished payload, adding job to finished_jobs queue
[2025-06-03 13:20:22] 2025-06-03 13:19:54,410 | INFO | monitor loop #434: job 0:6677454664 is in state 'finished'
[2025-06-03 13:20:22] 2025-06-03 13:19:54,411 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 13:20:22] 2025-06-03 13:19:55,965 | INFO | job 6677454664 has state=finished
[2025-06-03 13:20:22] 2025-06-03 13:19:55,965 | INFO | preparing for final server update for job 6677454664 in state='finished'
[2025-06-03 13:20:22] 2025-06-03 13:19:55,967 | INFO | this job has now completed (state=finished)
[2025-06-03 13:20:22] 2025-06-03 13:19:55,967 | INFO | pilot will not update the server (heartbeat message will be written to file)
[2025-06-03 13:20:22] 2025-06-03 13:19:55,967 | INFO | log transfer has been attempted: DONE
[2025-06-03 13:20:22] 2025-06-03 13:19:55,968 | INFO | job 6677454664 has finished - writing final server update
[2025-06-03 13:20:22] 2025-06-03 13:19:55,968 | INFO | total number of processed events: 400 (read)
[2025-06-03 13:20:22] 2025-06-03 13:19:55,975 | INFO | executing command: lscpu
[2025-06-03 13:20:22] 2025-06-03 13:19:56,017 | INFO | executing command: export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase;source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh --quiet;lsetup
[2025-06-03 13:20:22] 2025-06-03 13:19:56,914 | INFO | monitor loop #435: job 0:6677454664 is in state 'finished'
[2025-06-03 13:20:22] 2025-06-03 13:19:56,915 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 13:20:22] 2025-06-03 13:19:59,419 | INFO | monitor loop #436: job 0:6677454664 is in state 'finished'
[2025-06-03 13:20:22] 2025-06-03 13:19:59,420 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 13:20:22] 2025-06-03 13:20:01,923 | INFO | monitor loop #437: job 0:6677454664 is in state 'finished'
[2025-06-03 13:20:22] 2025-06-03 13:20:01,923 | INFO | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 13:20:22] 2025-06-03 13:20:02,897 | INFO | CPU arch script returned: x86-64-v2
[2025-06-03 13:20:22] 2025-06-03 13:20:02,898 | INFO | using path: /var/lib/boinc/slots/0/PanDA_Pilot-6677454664/memory_monitor_summary.json (trf name=prmon)
[2025-06-03 13:20:22] 2025-06-03 13:20:02,901 | INFO | extracted standard info from prmon json
[2025-06-03 13:20:22] 2025-06-03 13:20:02,901 | INFO | extracted standard memory fields from prmon json
[2025-06-03 13:20:22] 2025-06-03 13:20:02,902 | WARNING | GPU info not found in prmon json: 'gpu'
[2025-06-03 13:20:22] 2025-06-03 13:20:02,903 | WARNING | format EVNTtoHITS has no such key: dbData
[2025-06-03 13:20:22] 2025-06-03 13:20:02,903 | WARNING | format EVNTtoHITS has no such key: dbTime
[2025-06-03 13:20:22] 2025-06-03 13:20:02,908 | INFO | fitting pss+swap vs Time
[2025-06-03 13:20:22] 2025-06-03 13:20:02,909 | INFO | sum of square deviations: 226022842.5
[2025-06-03 13:20:22] 2025-06-03 13:20:02,910 | INFO | sum of deviations: -17713535843.5
[2025-06-03 13:20:22] 2025-06-03 13:20:02,910 | INFO | mean x: 1748953936.5
[2025-06-03 13:20:22] 2025-06-03 13:20:02,910 | INFO | mean y: 2668667.8777777776
[2025-06-03 13:20:22] 2025-06-03 13:20:02,911 | INFO | -- intersect: 137069161152.2521
[2025-06-03 13:20:22] 2025-06-03 13:20:02,911 | INFO | intersect: 137069161152.2521
[2025-06-03 13:20:22] 2025-06-03 13:20:02,911 | INFO | chi2: 0.16485640540503224
[2025-06-03 13:20:22] 2025-06-03 13:20:02,911 | INFO | -- intersect: 137069161152.2521
[2025-06-03 13:20:22] 2025-06-03 13:20:02,911 | INFO | current memory leak: -78.37 B/s (using 90 data points, chi2=0.16)
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | ..............................
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . Timing measurements:
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . get job = 0 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . initial setup = 3 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . payload setup = 12 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . stage-in = 0 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . payload execution = 5872 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . stage-out = 0 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,912 | INFO | . log creation = 0 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,913 | INFO | ..............................
[2025-06-03 13:20:22] 2025-06-03 13:20:02,978 | INFO |
[2025-06-03 13:20:22] 2025-06-03 13:20:02,979 | INFO | job summary report
[2025-06-03 13:20:22] 2025-06-03 13:20:02,979 | INFO | --------------------------------------------------
[2025-06-03 13:20:22] 2025-06-03 13:20:02,980 | INFO | PanDA job id: 6677454664
[2025-06-03 13:20:22] 2025-06-03 13:20:02,980 | INFO | task id: 45006595
[2025-06-03 13:20:22] 2025-06-03 13:20:02,981 | INFO | errors: (none)
[2025-06-03 13:20:22] 2025-06-03 13:20:02,981 | INFO | status: LOG_TRANSFER = DONE
[2025-06-03 13:20:22] 2025-06-03 13:20:02,981 | INFO | pilot state: finished
[2025-06-03 13:20:22] 2025-06-03 13:20:02,982 | INFO | transexitcode: 0
[2025-06-03 13:20:22] 2025-06-03 13:20:02,982 | INFO | exeerrorcode: 0
[2025-06-03 13:20:22] 2025-06-03 13:20:02,982 | INFO | exeerrordiag:
[2025-06-03 13:20:22] 2025-06-03 13:20:02,983 | INFO | exitcode: 0
[2025-06-03 13:20:22] 2025-06-03 13:20:02,983 | INFO | exitmsg: OK
[2025-06-03 13:20:22] 2025-06-03 13:20:02,983 | INFO | cpuconsumptiontime: 44456 s
[2025-06-03 13:20:22] 2025-06-03 13:20:02,983 | INFO | nevents: 400
[2025-06-03 13:20:22] 2025-06-03 13:20:02,984 | INFO | neventsw: 0
[2025-06-03 13:20:22] 2025-06-03 13:20:02,984 | INFO | pid: 247219
[2025-06-03 13:20:22] 2025-06-03 13:20:02,984 | INFO | pgrp: 247219
[2025-06-03 13:20:22] 2025-06-03 13:20:02,985 | INFO | corecount: 8
[2025-06-03 13:20:22] 2025-06-03 13:20:02,985 | INFO | event service: False
[2025-06-03 13:20:22] 2025-06-03 13:20:02,985 | INFO | sizes: {0: 2405011, 1: 2405210, 7: 2405416, 11: 2405444, 5888: 2431909, 5890: 2440850, 5892: 2441034, 5901: 2441204}
[2025-06-03 13:20:22] 2025-06-03 13:20:02,985 | INFO | --------------------------------------------------
[2025-06-03 13:20:22] 2025-06-03 13:20:02,986 | INFO |
[2025-06-03 13:20:22] 2025-06-03 13:20:02,986 | INFO | executing command: ls -lF /var/lib/boinc/slots/0
[2025-06-03 13:20:22] 2025-06-03 13:20:03,024 | INFO | queue jobs had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,025 | INFO | queue payloads had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,025 | INFO | queue data_in had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,025 | INFO | queue data_out had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,026 | INFO | queue current_data_in had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,026 | INFO | queue validated_jobs had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,026 | INFO | queue validated_payloads had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,027 | INFO | queue monitored_payloads had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,027 | INFO | queue finished_jobs had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,027 | INFO | queue finished_payloads had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,028 | INFO | queue finished_data_in had 1 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,028 | INFO | queue finished_data_out had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,028 | INFO | queue failed_jobs had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,029 | INFO | queue failed_payloads had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,029 | INFO | queue failed_data_in had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,029 | INFO | queue failed_data_out had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,030 | INFO | queue completed_jobs had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,030 | INFO | queue completed_jobids has 1 job(s)
[2025-06-03 13:20:22] 2025-06-03 13:20:03,030 | INFO | queue realtimelog_payloads had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,031 | INFO | queue messages had 0 job(s) [purged]
[2025-06-03 13:20:22] 2025-06-03 13:20:03,031 | INFO | job 6677454664 has completed (purged errors)
[2025-06-03 13:20:22] 2025-06-03 13:20:03,032 | INFO | overall cleanup function is called
[2025-06-03 13:20:22] 2025-06-03 13:20:04,045 | INFO | --- collectZombieJob: --- 10, [247219]
[2025-06-03 13:20:22] 2025-06-03 13:20:04,045 | INFO | zombie collector waiting for pid 247219
[2025-06-03 13:20:22] 2025-06-03 13:20:04,046 | INFO | harmless exception when collecting zombies: [Errno 10] No child processes
[2025-06-03 13:20:22] 2025-06-03 13:20:04,046 | INFO | collected zombie processes
[2025-06-03 13:20:22] 2025-06-03 13:20:04,047 | INFO | will attempt to kill all subprocesses of pid=247219
[2025-06-03 13:20:22] 2025-06-03 13:20:04,174 | INFO | process IDs to be killed: [247219] (in reverse order)
[2025-06-03 13:20:22] 2025-06-03 13:20:04,227 | WARNING | found no corresponding commands to process id(s)
[2025-06-03 13:20:22] 2025-06-03 13:20:04,228 | INFO | Do not look for orphan processes in BOINC jobs
[2025-06-03 13:20:22] 2025-06-03 13:20:04,233 | INFO | did not find any defunct processes belonging to 247219
[2025-06-03 13:20:22] 2025-06-03 13:20:04,237 | INFO | did not find any defunct processes belonging to 247219
[2025-06-03 13:20:22] 2025-06-03 13:20:04,237 | INFO | ready for new job
[2025-06-03 13:20:22] 2025-06-03 13:20:04,238 | INFO | pilot has finished with previous job - re-establishing logging
[2025-06-03 13:20:22] 2025-06-03 13:20:04,242 | INFO | **************************************
[2025-06-03 13:20:22] 2025-06-03 13:20:04,242 | INFO | *** PanDA Pilot version 3.10.2.2 ***
[2025-06-03 13:20:22] 2025-06-03 13:20:04,242 | INFO | **************************************
[2025-06-03 13:20:22] 2025-06-03 13:20:04,243 | INFO |
[2025-06-03 13:20:22] 2025-06-03 13:20:04,244 | INFO | pilot is running in a VM
[2025-06-03 13:20:22] 2025-06-03 13:20:04,244 | INFO | architecture information:
[2025-06-03 13:20:22] 2025-06-03 13:20:04,245 | INFO | executing command: cat /etc/os-release
[2025-06-03 13:20:22] 2025-06-03 13:20:04,267 | INFO | cat /etc/os-release:
[2025-06-03 13:20:22] NAME="CentOS Linux"
[2025-06-03 13:20:22] VERSION="7 (Core)"
[2025-06-03 13:20:22] ID="centos"
[2025-06-03 13:20:22] ID_LIKE="rhel fedora"
[2025-06-03 13:20:22] VERSION_ID="7"
[2025-06-03 13:20:22] PRETTY_NAME="CentOS Linux 7 (Core)"
[2025-06-03 13:20:22] ANSI_COLOR="0;31"
[2025-06-03 13:20:22] CPE_NAME="cpe:/o:centos:centos:7"
[2025-06-03 13:20:22] HOME_URL="https://www.centos.org/"
[2025-06-03 13:20:22] BUG_REPORT_URL="https://bugs.centos.org/"
[2025-06-03 13:20:22]
[2025-06-03 13:20:22] CENTOS_MANTISBT_PROJECT="CentOS-7"
[2025-06-03 13:20:22] CENTOS_MANTISBT_PROJECT_VERSION="7"
[2025-06-03 13:20:22] REDHAT_SUPPORT_PRODUCT="centos"
[2025-06-03 13:20:22] REDHAT_SUPPORT_PRODUCT_VERSION="7"
[2025-06-03 13:20:22]
[2025-06-03 13:20:22] 2025-06-03 13:20:04,268 | INFO | **************************************
[2025-06-03 13:20:22] 2025-06-03 13:20:04,771 | INFO | executing command: df -mP /var/lib/boinc/slots/0
[2025-06-03 13:20:22] 2025-06-03 13:20:04,801 | INFO | sufficient remaining disk space (31085035520 B)
[2025-06-03 13:20:22] 2025-06-03 13:20:04,802 | WARNING | since timefloor is set to 0, pilot was only allowed to run one job
[2025-06-03 13:20:22] 2025-06-03 13:20:04,802 | INFO | current server update state: UPDATING_FINAL
[2025-06-03 13:20:22] 2025-06-03 13:20:04,803 | INFO | update_server=False
[2025-06-03 13:20:22] 2025-06-03 13:20:04,803 | WARNING | setting graceful_stop since proceed_with_getjob() returned False (pilot will end)
[2025-06-03 13:20:22] 2025-06-03 13:20:04,804 | WARNING | job:queue_monitor:received graceful stop - abort after this iteration
[2025-06-03 13:20:22] 2025-06-03 13:20:04,804 | WARNING | data:queue_monitoring:received graceful stop - abort after this iteration
[2025-06-03 13:20:22] 2025-06-03 13:20:04,805 | WARNING | aborting monitor loop since graceful_stop has been set (timing out remaining threads)
[2025-06-03 13:20:22] 2025-06-03 13:20:04,806 | INFO | found 0 job(s) in 20 queues
[2025-06-03 13:20:22] 2025-06-03 13:20:04,807 | WARNING | pilot monitor received instruction that args.graceful_stop has been set
[2025-06-03 13:20:22] 2025-06-03 13:20:04,807 | WARNING | will wait for a maximum of 300 s for threads to finish
[2025-06-03 13:20:22] 2025-06-03 13:20:04,914 | WARNING | data:copytool_out:received graceful stop - abort after this iteration
[2025-06-03 13:20:22] 2025-06-03 13:20:05,393 | INFO | all payload control threads have been joined
[2025-06-03 13:20:22] 2025-06-03 13:20:05,437 | WARNING | job:job_monitor:received graceful stop - abort after this iteration
[2025-06-03 13:20:22] 2025-06-03 13:20:05,437 | INFO | aborting loop
[2025-06-03 13:20:22] 2025-06-03 13:20:05,615 | INFO | all data control threads have been joined
[2025-06-03 13:20:22] 2025-06-03 13:20:05,696 | INFO | all job control threads have been joined
[2025-06-03 13:20:22] 2025-06-03 13:20:05,809 | INFO | [job] retrieve thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:05,810 | INFO | [job] queue monitor thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:05,913 | INFO | [payload] failed_post thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,043 | INFO | [payload] execute_payloads thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,350 | INFO | [data] copytool_in thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,385 | INFO | [job] validate thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,395 | INFO | [payload] control thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,443 | INFO | [job] job monitor thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,621 | INFO | [data] control thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,702 | INFO | [job] control thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,869 | INFO | [job] create_data_payload thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,886 | INFO | [payload] validate_pre thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:06,920 | INFO | [data] copytool_out thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:07,185 | INFO | [payload] validate_post thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:08,806 | INFO | [data] queue_monitor thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:15,808 | INFO | job.realtimelogging is not enabled
[2025-06-03 13:20:22] 2025-06-03 13:20:16,815 | INFO | [payload] run_realtimelog thread has finished
[2025-06-03 13:20:22] 2025-06-03 13:20:16,845 | INFO | only monitor.control thread still running - safe to abort: ['<_MainThread(MainThread, started 140279745197888)>', '<ExcThread(monitor, started 140278765696768)>']
[2025-06-03 13:20:22] 2025-06-03 13:20:16,869 | WARNING | job_aborted has been set - aborting pilot monitoring
[2025-06-03 13:20:22] 2025-06-03 13:20:16,870 | INFO | [monitor] control thread has ended
[2025-06-03 13:20:22] 2025-06-03 13:20:21,871 | INFO | all workflow threads have been joined
[2025-06-03 13:20:22] 2025-06-03 13:20:21,872 | INFO | end of generic workflow (traces error code: 0)
[2025-06-03 13:20:22] 2025-06-03 13:20:21,873 | INFO | traces error code: 0
[2025-06-03 13:20:22] 2025-06-03 13:20:21,873 | INFO | pilot has finished (exit code=0, shell exit code=0)
[2025-06-03 13:20:22] 2025-06-03 13:20:22,011 [wrapper] ==== pilot stdout END ====
[2025-06-03 13:20:22] 2025-06-03 13:20:22,015 [wrapper] ==== wrapper stdout RESUME ====
[2025-06-03 13:20:22] 2025-06-03 13:20:22,021 [wrapper] pilotpid: 240660
[2025-06-03 13:20:22] 2025-06-03 13:20:22,028 [wrapper] Pilot exit status: 0
[2025-06-03 13:20:22] 2025-06-03 13:20:22,052 [wrapper] pandaids: 6677454664
[2025-06-03 13:20:22] 2025-06-03 13:20:22,105 [wrapper] cleanup supervisor_pilot 258789 240661
[2025-06-03 13:20:22] 2025-06-03 13:20:22,111 [wrapper] Test setup, not cleaning
[2025-06-03 13:20:22] 2025-06-03 13:20:22,117 [wrapper] apfmon messages muted
[2025-06-03 13:20:22] 2025-06-03 13:20:22,123 [wrapper] ==== wrapper stdout END ====
[2025-06-03 13:20:22] 2025-06-03 13:20:22,129 [wrapper] ==== wrapper stderr END ====
[2025-06-03 13:20:22] *** Error codes and diagnostics ***
[2025-06-03 13:20:22]   "exeErrorCode": 0,
[2025-06-03 13:20:22]   "exeErrorDiag": "",
[2025-06-03 13:20:22]   "pilotErrorCode": 0,
[2025-06-03 13:20:22]   "pilotErrorDiag": "",
[2025-06-03 13:20:22] *** Listing of results directory ***
[2025-06-03 13:20:22] total 391512
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc    495897 Jun  3 02:50 pilot3.tar.gz
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc      5111 Jun  3 03:08 queuedata.json
[2025-06-03 13:20:22] -rwx------ 1 boinc boinc     37140 Jun  3 03:09 runpilot2-wrapper.sh
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc       100 Jun  3 11:41 wrapper_26015_x86_64-pc-linux-gnu
[2025-06-03 13:20:22] -rwxr-xr-x 1 boinc boinc      7986 Jun  3 11:41 run_atlas
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc       105 Jun  3 11:41 job.xml
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc      6318 Jun  3 11:41 init_data.xml
[2025-06-03 13:20:22] -rw-r--r-- 2 boinc boinc 216516440 Jun  3 11:41 EVNT.45006593._000725.pool.root.1
[2025-06-03 13:20:22] -rw-r--r-- 2 boinc boinc     15093 Jun  3 11:41 start_atlas.sh
[2025-06-03 13:20:22] drwxrwx--x 2 boinc boinc      4096 Jun  3 11:41 shared
[2025-06-03 13:20:22] -rw-r--r-- 2 boinc boinc    508884 Jun  3 11:41 input.tar.gz
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc         0 Jun  3 11:41 boinc_lockfile
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc      2524 Jun  3 11:41 pandaJob.out
[2025-06-03 13:20:22] -rw------- 1 boinc boinc    982666 Jun  3 11:41 agis_schedconf.cvmfs.json
[2025-06-03 13:20:22] -rw------- 1 boinc boinc   1603667 Jun  3 11:41 agis_ddmendpoints.agis.ALL.json
[2025-06-03 13:20:22] drwx------ 4 boinc boinc      4096 Jun  3 11:41 pilot3
[2025-06-03 13:20:22] -rw------- 1 boinc boinc 178797096 Jun  3 13:19 HITS.45006595._018119.pool.root.1
[2025-06-03 13:20:22] -rw------- 1 boinc boinc        95 Jun  3 13:19 pilot_heartbeat.json
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc       529 Jun  3 13:19 boinc_task_state.xml
[2025-06-03 13:20:22] -rw------- 1 boinc boinc      1018 Jun  3 13:19 memory_monitor_summary.json
[2025-06-03 13:20:22] -rw------- 1 boinc boinc    332873 Jun  3 13:19 log.45006595._018119.job.log.tgz.1
[2025-06-03 13:20:22] -rw------- 1 boinc boinc      7682 Jun  3 13:20 heartbeat.json
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc      8192 Jun  3 13:20 boinc_mmap_file
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc        27 Jun  3 13:20 wrapper_checkpoint.txt
[2025-06-03 13:20:22] -rw------- 1 boinc boinc      4610 Jun  3 13:20 pilotlog.txt
[2025-06-03 13:20:22] -rw------- 1 boinc boinc    545726 Jun  3 13:20 log.45006595._018119.job.log.1
[2025-06-03 13:20:22] -rw------- 1 boinc boinc       357 Jun  3 13:20 output.list
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc       620 Jun  3 13:20 runtime_log
[2025-06-03 13:20:22] -rw------- 1 boinc boinc    901120 Jun  3 13:20 result.tar.gz
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc      8695 Jun  3 13:20 runtime_log.err
[2025-06-03 13:20:22] -rw------- 1 boinc boinc       651 Jun  3 13:20 6c6MDm6CXg7nsSi4ap6QjLDmwznN0nGgGQJmUXzaDmOjhKDmjY2FSm.diag
[2025-06-03 13:20:22] -rw-r--r-- 1 boinc boinc     21786 Jun  3 13:20 stderr.txt
[2025-06-03 13:20:22] HITS file was successfully produced:
[2025-06-03 13:20:22] -rw------- 1 boinc boinc 178797096 Jun  3 13:19 shared/HITS.pool.root.1
[2025-06-03 13:20:22] *** Contents of shared directory: ***
[2025-06-03 13:20:22] total 387456
[2025-06-03 13:20:22] -rw-r--r-- 2 boinc boinc 216516440 Jun  3 11:41 ATLAS.root_0
[2025-06-03 13:20:22] -rw-r--r-- 2 boinc boinc     15093 Jun  3 11:41 start_atlas.sh
[2025-06-03 13:20:22] -rw-r--r-- 2 boinc boinc    508884 Jun  3 11:41 input.tar.gz
[2025-06-03 13:20:22] -rw------- 1 boinc boinc 178797096 Jun  3 13:19 HITS.pool.root.1
[2025-06-03 13:20:22] -rw------- 1 boinc boinc    901120 Jun  3 13:20 result.tar.gz
13:20:23 (236983): run_atlas exited; CPU time 44480.563349
13:20:23 (236983): called boinc_finish(0)
</stderr_txt>
]]>
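The "current memory leak" line in the pilot log above is the slope of a straight-line least-squares fit of pss+swap against sample time, and the slope and intercept can be reproduced from the sums the pilot prints just before it. A minimal sketch, using the logged values; variable names are illustrative and this is not the PanDA pilot's own code:

```python
# Reproduce the pilot's memory-leak estimate from the quantities it logs
# (illustrative only; not the PanDA pilot implementation).
sxx = 226022842.5            # "sum of square deviations": sum((x - mean_x)**2)
sxy = -17713535843.5         # "sum of deviations": sum((x - mean_x) * (y - mean_y))
mean_x = 1748953936.5        # mean sample time (Unix seconds)
mean_y = 2668667.8777777776  # mean pss+swap as recorded by prmon

slope = sxy / sxx                    # ~ -78.37, matching the "current memory leak" line
intercept = mean_y - slope * mean_x  # ~ 1.3707e11, approximately the logged "intersect"

print(f"leak rate: {slope:.2f}")     # the pilot reports this value as B/s
print(f"intercept: {intercept:.1f}")
```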