Name JZPKDmuFnd7nsSi4ap6QjLDmwznN0nGgGQJmUXzaDm2uWKDmIyc9tn_2
Workunit 232615415
Created 3 Jun 2025, 19:33:43 UTC
Sent 3 Jun 2025, 21:14:58 UTC
Report deadline 11 Jun 2025, 21:14:58 UTC
Received 3 Jun 2025, 23:46:00 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x00000000)
Computer ID 10874960
Run time 2 hours 14 min 31 sec
CPU time 15 hours 35 min 42 sec
Validate state Valid
Credit 416.02
Device peak FLOPS 12.00 GFLOPS
Application version ATLAS Simulation v3.01 (native_mt) x86_64-pc-linux-gnu
Peak working set size 2.67 GB
Peak swap size 3.01 GB
Peak disk usage 843.24 MB

Stderr output

<core_client_version>8.1.0</core_client_version>
<![CDATA[
<stderr_txt>
17:15:15 (2433593): wrapper (7.7.26015): starting
17:15:15 (2433593): wrapper: running run_atlas (--nthreads 8)
[2025-06-03 17:15:15] Arguments: --nthreads 8
[2025-06-03 17:15:15] Threads: 8
[2025-06-03 17:15:15] Checking for CVMFS
[2025-06-03 17:15:15] Probing /cvmfs/atlas.cern.ch... OK
[2025-06-03 17:15:15] Probing /cvmfs/atlas-condb.cern.ch... OK
[2025-06-03 17:15:15] Running cvmfs_config stat atlas.cern.ch
[2025-06-03 17:15:15] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2025-06-03 17:15:15] 2.12.6.0 4103777 22391 441436 146755 2 695 31513585 32503809 10885 16776704 0 191417808 99.637 176776647 22520 http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/atlas.cern.ch http://192.41.231.237:6081 1
[2025-06-03 17:15:15] CVMFS is ok
[2025-06-03 17:15:15] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2025-06-03 17:15:15] The CVMFS client on this computer should be configured to use Cloudflare's openhtc.io.
[2025-06-03 17:15:15] Further information can be found at the LHC@home message board.
[2025-06-03 17:15:15] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2025-06-03 17:15:15] Checking for apptainer binary...
[2025-06-03 17:15:15] Using apptainer found in PATH at /usr/bin/apptainer
[2025-06-03 17:15:15] Running /usr/bin/apptainer --version
[2025-06-03 17:15:15] apptainer version 1.4.1-1.el9
[2025-06-03 17:15:15] Checking apptainer works with /usr/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2025-06-03 17:15:16] c7-7-36.aglt2.org
[2025-06-03 17:15:16] apptainer works
[2025-06-03 17:15:16] Set ATHENA_PROC_NUMBER=8
[2025-06-03 17:15:16] Set ATHENA_CORE_NUMBER=8
[2025-06-03 17:15:16] Starting ATLAS job with PandaID=6665990251
[2025-06-03 17:15:16] Running command: /usr/bin/apptainer exec -B /cvmfs,/tmp/boinchome/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
18:27:58 (2675712): wrapper (7.7.26015): starting
18:27:58 (2675712): wrapper: running run_atlas (--nthreads 8)
[2025-06-03 18:27:58] Arguments: --nthreads 8
[2025-06-03 18:27:58] Threads: 8
[2025-06-03 18:27:58] This job has been restarted, cleaning up previous attempt
[2025-06-03 18:27:58] Checking for CVMFS
[2025-06-03 18:27:58] Probing /cvmfs/atlas.cern.ch... OK
[2025-06-03 18:27:58] Probing /cvmfs/atlas-condb.cern.ch... OK
[2025-06-03 18:27:58] Running cvmfs_config stat atlas.cern.ch
[2025-06-03 18:27:58] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2025-06-03 18:27:58] 2.12.6.0 4103777 22464 434960 146760 2 140 16440951 32503808 11019 16776704 0 191876148 99.636 177123478 22539 http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/atlas.cern.ch http://192.41.231.237:6081 1
[2025-06-03 18:27:58] CVMFS is ok
[2025-06-03 18:27:58] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2025-06-03 18:27:58] The CVMFS client on this computer should be configured to use Cloudflare's openhtc.io.
[2025-06-03 18:27:58] Further information can be found at the LHC@home message board.
[2025-06-03 18:27:58] Using apptainer image /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7
[2025-06-03 18:27:58] Checking for apptainer binary...
[2025-06-03 18:27:58] Using apptainer found in PATH at /usr/bin/apptainer
[2025-06-03 18:27:58] Running /usr/bin/apptainer --version
[2025-06-03 18:27:58] apptainer version 1.4.1-1.el9
[2025-06-03 18:27:58] Checking apptainer works with /usr/bin/apptainer exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 hostname
[2025-06-03 18:27:59] c7-7-36.aglt2.org
[2025-06-03 18:27:59] apptainer works
[2025-06-03 18:27:59] Set ATHENA_PROC_NUMBER=8
[2025-06-03 18:27:59] Set ATHENA_CORE_NUMBER=8
[2025-06-03 18:27:59] Starting ATLAS job with PandaID=6665990251
[2025-06-03 18:27:59] Running command: /usr/bin/apptainer exec -B /cvmfs,/tmp/boinchome/slots/0 /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 sh start_atlas.sh
[2025-06-03 19:39:46]  *** The last 200 lines of the pilot log: ***
[2025-06-03 19:39:46]  domain=
[2025-06-03 19:39:46]  filesize=290551
[2025-06-03 19:39:46]  filetype=log
[2025-06-03 19:39:46]  guid=23c221ae-6363-4e4d-964e-b7d56f4b4acf
[2025-06-03 19:39:46]  inputddms=['NDGF-T1_DATADISK', 'CERN-PROD_DATADISK']
[2025-06-03 19:39:46]  is_altstaged=None
[2025-06-03 19:39:46]  is_tar=False
[2025-06-03 19:39:46]  lfn=log.44871837._051705.job.log.tgz.1
[2025-06-03 19:39:46]  mtime=0
[2025-06-03 19:39:46]  protocol_id=None
[2025-06-03 19:39:46]  protocols=[{'endpoint': 'davs://dav.ndgf.org:443', 'flavour': 'WEBDAV', 'id': 331, 'path': '/atlas/disk/atlasdatadisk/rucio/'}]
[2025-06-03 19:39:46]  replicas=None
[2025-06-03 19:39:46]  scope=mc23_13p6TeV
[2025-06-03 19:39:46]  status=None
[2025-06-03 19:39:46]  status_code=0
[2025-06-03 19:39:46]  storage_token=
[2025-06-03 19:39:46]  surl=/tmp/boinchome/slots/0/PanDA_Pilot-6665990251/log.44871837._051705.job.log.tgz.1
[2025-06-03 19:39:46]  turl=davs://dav.ndgf.org:443/atlas/disk/atlasdatadisk/rucio/mc23_13p6TeV/56/45/log.44871837._051705.job.log.tgz.1
[2025-06-03 19:39:46]  workdir=None
[2025-06-03 19:39:46] ]
[2025-06-03 19:39:46] 2025-06-03 23:39:29,672 | INFO     | transferring file log.44871837._051705.job.log.tgz.1 from /tmp/boinchome/slots/0/PanDA_Pilot-6665990251/log.44871837._051705.job.log.tgz.1 to /tmp/boinchome/slots/
[2025-06-03 19:39:46] 2025-06-03 23:39:29,673 | INFO     | executing command: /usr/bin/env mv /tmp/boinchome/slots/0/PanDA_Pilot-6665990251/log.44871837._051705.job.log.tgz.1 /tmp/boinchome/slots/0/log.44871837._051705.job
[2025-06-03 19:39:46] 2025-06-03 23:39:29,695 | INFO     | adding to output.list: log.44871837._051705.job.log.tgz.1 davs://dav.ndgf.org:443/atlas/disk/atlasdatadisk/rucio/mc23_13p6TeV/56/45/log.44871837._051705.job.log.tg
[2025-06-03 19:39:46] 2025-06-03 23:39:29,696 | INFO     | alt stage-out settings: ['pl', 'write_lan', 'w', 'default'], allow_altstageout=False, remain_files=0, has_altstorage=True
[2025-06-03 19:39:46] 2025-06-03 23:39:29,697 | INFO     | summary of transferred files:
[2025-06-03 19:39:46] 2025-06-03 23:39:29,697 | INFO     |  -- lfn=log.44871837._051705.job.log.tgz.1, status_code=0, status=transferred
[2025-06-03 19:39:46] 2025-06-03 23:39:29,697 | INFO     | stage-out finished correctly
[2025-06-03 19:39:46] 2025-06-03 23:39:30,416 | INFO     | finished stage-out for finished payload, adding job to finished_jobs queue
[2025-06-03 19:39:46] 2025-06-03 23:39:30,453 | INFO     | job 6665990251 has state=finished
[2025-06-03 19:39:46] 2025-06-03 23:39:30,454 | INFO     | preparing for final server update for job 6665990251 in state='finished'
[2025-06-03 19:39:46] 2025-06-03 23:39:30,454 | INFO     | this job has now completed (state=finished)
[2025-06-03 19:39:46] 2025-06-03 23:39:30,454 | INFO     | pilot will not update the server (heartbeat message will be written to file)
[2025-06-03 19:39:46] 2025-06-03 23:39:30,455 | INFO     | log transfer has been attempted: DONE
[2025-06-03 19:39:46] 2025-06-03 23:39:30,455 | INFO     | job 6665990251 has finished - writing final server update
[2025-06-03 19:39:46] 2025-06-03 23:39:30,455 | INFO     | total number of processed events: 400 (read)
[2025-06-03 19:39:46] 2025-06-03 23:39:30,472 | INFO     | executing command: lscpu
[2025-06-03 19:39:46] 2025-06-03 23:39:30,546 | INFO     | executing command: export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase;source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh --quiet;lsetup
[2025-06-03 19:39:46] 2025-06-03 23:39:30,786 | INFO     | monitor loop #315: job 0:6665990251 is in state 'finished'
[2025-06-03 19:39:46] 2025-06-03 23:39:30,787 | INFO     | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 19:39:46] 2025-06-03 23:39:33,288 | INFO     | monitor loop #316: job 0:6665990251 is in state 'finished'
[2025-06-03 19:39:46] 2025-06-03 23:39:33,289 | INFO     | will abort job monitoring soon since job state=finished (job is still in queue)
[2025-06-03 19:39:46] 2025-06-03 23:39:34,095 | INFO     | CPU arch script returned: x86-64-v3
[2025-06-03 19:39:46] 2025-06-03 23:39:34,095 | INFO     | using path: /tmp/boinchome/slots/0/PanDA_Pilot-6665990251/memory_monitor_summary.json (trf name=prmon)
[2025-06-03 19:39:46] 2025-06-03 23:39:34,096 | INFO     | extracted standard info from prmon json
[2025-06-03 19:39:46] 2025-06-03 23:39:34,096 | INFO     | extracted standard memory fields from prmon json
[2025-06-03 19:39:46] 2025-06-03 23:39:34,096 | WARNING  | GPU info not found in prmon json: 'gpu'
[2025-06-03 19:39:46] 2025-06-03 23:39:34,096 | WARNING  | format EVNTtoHITS has no such key: dbData
[2025-06-03 19:39:46] 2025-06-03 23:39:34,097 | WARNING  | format EVNTtoHITS has no such key: dbTime
[2025-06-03 19:39:46] 2025-06-03 23:39:34,098 | INFO     | fitting pss+swap vs Time
[2025-06-03 19:39:46] 2025-06-03 23:39:34,098 | INFO     | sum of square deviations: 77515872.0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,098 | INFO     | sum of deviations: 5680434924.0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | mean x: 1748991906.0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | mean y: 2204594.1746031744
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | -- intersect: -128165542839.99895
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | intersect: -128165542839.99895
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | chi2: 0.8458938096538104
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | -- intersect: -128165542839.99895
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | current memory leak: 73.28 B/s (using 63 data points, chi2=0.85)
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | ..............................
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | . Timing measurements:
[2025-06-03 19:39:46] 2025-06-03 23:39:34,099 | INFO     | . get job = 0 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | . initial setup = 0 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | . payload setup = 5 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | . stage-in = 0 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | . payload execution = 4258 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | . stage-out = 0 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | . log creation = 0 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,100 | INFO     | ..............................
[2025-06-03 19:39:46] 2025-06-03 23:39:34,289 | INFO     | 
[2025-06-03 19:39:46] 2025-06-03 23:39:34,290 | INFO     | job summary report
[2025-06-03 19:39:46] 2025-06-03 23:39:34,290 | INFO     | --------------------------------------------------
[2025-06-03 19:39:46] 2025-06-03 23:39:34,290 | INFO     | PanDA job id: 6665990251
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | task id: 44871837
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | errors: (none)
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | status: LOG_TRANSFER = DONE 
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | pilot state: finished 
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | transexitcode: 0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | exeerrorcode: 0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | exeerrordiag: 
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | exitcode: 0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | exitmsg: OK
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | cpuconsumptiontime: 29196 s
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | nevents: 400
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | neventsw: 0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | pid: 2706052
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | pgrp: 2706052
[2025-06-03 19:39:46] 2025-06-03 23:39:34,291 | INFO     | corecount: 8
[2025-06-03 19:39:46] 2025-06-03 23:39:34,292 | INFO     | event service: False
[2025-06-03 19:39:46] 2025-06-03 23:39:34,292 | INFO     | sizes: {0: 2405234, 4: 2405440, 11: 2405468, 4268: 2431334, 4270: 2440331, 4274: 2440445}
[2025-06-03 19:39:46] 2025-06-03 23:39:34,292 | INFO     | --------------------------------------------------
[2025-06-03 19:39:46] 2025-06-03 23:39:34,292 | INFO     | 
[2025-06-03 19:39:46] 2025-06-03 23:39:34,292 | INFO     | executing command: ls -lF /tmp/boinchome/slots/0
[2025-06-03 19:39:46] 2025-06-03 23:39:34,308 | INFO     | queue jobs had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue payloads had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue data_in had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue data_out had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue current_data_in had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue validated_jobs had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue validated_payloads had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue monitored_payloads had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue finished_jobs had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue finished_payloads had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue finished_data_in had 1 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue finished_data_out had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue failed_jobs had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,309 | INFO     | queue failed_payloads had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | queue failed_data_in had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | queue failed_data_out had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | queue completed_jobs had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | queue completed_jobids has 1 job(s)
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | queue realtimelog_payloads had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | queue messages had 0 job(s) [purged]
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | job 6665990251 has completed (purged errors)
[2025-06-03 19:39:46] 2025-06-03 23:39:34,310 | INFO     | overall cleanup function is called
[2025-06-03 19:39:46] 2025-06-03 23:39:35,318 | INFO     | --- collectZombieJob: --- 10, [2706052]
[2025-06-03 19:39:46] 2025-06-03 23:39:35,318 | INFO     | zombie collector waiting for pid 2706052
[2025-06-03 19:39:46] 2025-06-03 23:39:35,319 | INFO     | harmless exception when collecting zombies: [Errno 10] No child processes
[2025-06-03 19:39:46] 2025-06-03 23:39:35,319 | INFO     | collected zombie processes
[2025-06-03 19:39:46] 2025-06-03 23:39:35,319 | INFO     | will attempt to kill all subprocesses of pid=2706052
[2025-06-03 19:39:46] 2025-06-03 23:39:35,580 | INFO     | process IDs to be killed: [2706052] (in reverse order)
[2025-06-03 19:39:46] 2025-06-03 23:39:35,669 | WARNING  | found no corresponding commands to process id(s)
[2025-06-03 19:39:46] 2025-06-03 23:39:35,670 | INFO     | Do not look for orphan processes in BOINC jobs
[2025-06-03 19:39:46] 2025-06-03 23:39:35,681 | INFO     | did not find any defunct processes belonging to 2706052
[2025-06-03 19:39:46] 2025-06-03 23:39:35,694 | INFO     | did not find any defunct processes belonging to 2706052
[2025-06-03 19:39:46] 2025-06-03 23:39:35,694 | INFO     | ready for new job
[2025-06-03 19:39:46] 2025-06-03 23:39:35,694 | INFO     | pilot has finished with previous job - re-establishing logging
[2025-06-03 19:39:46] 2025-06-03 23:39:35,695 | INFO     | **************************************
[2025-06-03 19:39:46] 2025-06-03 23:39:35,695 | INFO     | ***  PanDA Pilot version 3.10.2.2  ***
[2025-06-03 19:39:46] 2025-06-03 23:39:35,695 | INFO     | **************************************
[2025-06-03 19:39:46] 2025-06-03 23:39:35,695 | INFO     | 
[2025-06-03 19:39:46] 2025-06-03 23:39:35,711 | INFO     | architecture information:
[2025-06-03 19:39:46] 2025-06-03 23:39:35,712 | INFO     | executing command: cat /etc/os-release
[2025-06-03 19:39:46] 2025-06-03 23:39:35,723 | INFO     | cat /etc/os-release:
[2025-06-03 19:39:46] NAME="CentOS Linux"
[2025-06-03 19:39:46] VERSION="7 (Core)"
[2025-06-03 19:39:46] ID="centos"
[2025-06-03 19:39:46] ID_LIKE="rhel fedora"
[2025-06-03 19:39:46] VERSION_ID="7"
[2025-06-03 19:39:46] PRETTY_NAME="CentOS Linux 7 (Core)"
[2025-06-03 19:39:46] ANSI_COLOR="0;31"
[2025-06-03 19:39:46] CPE_NAME="cpe:/o:centos:centos:7"
[2025-06-03 19:39:46] HOME_URL="https://www.centos.org/"
[2025-06-03 19:39:46] BUG_REPORT_URL="https://bugs.centos.org/"
[2025-06-03 19:39:46] 
[2025-06-03 19:39:46] CENTOS_MANTISBT_PROJECT="CentOS-7"
[2025-06-03 19:39:46] CENTOS_MANTISBT_PROJECT_VERSION="7"
[2025-06-03 19:39:46] REDHAT_SUPPORT_PRODUCT="centos"
[2025-06-03 19:39:46] REDHAT_SUPPORT_PRODUCT_VERSION="7"
[2025-06-03 19:39:46] 
[2025-06-03 19:39:46] 2025-06-03 23:39:35,724 | INFO     | **************************************
[2025-06-03 19:39:46] 2025-06-03 23:39:36,226 | INFO     | executing command: df -mP /tmp/boinchome/slots/0
[2025-06-03 19:39:46] 2025-06-03 23:39:36,244 | INFO     | sufficient remaining disk space (95691997184 B)
[2025-06-03 19:39:46] 2025-06-03 23:39:36,245 | WARNING  | since timefloor is set to 0, pilot was only allowed to run one job
[2025-06-03 19:39:46] 2025-06-03 23:39:36,245 | INFO     | current server update state: UPDATING_FINAL
[2025-06-03 19:39:46] 2025-06-03 23:39:36,245 | INFO     | update_server=False
[2025-06-03 19:39:46] 2025-06-03 23:39:36,245 | WARNING  | setting graceful_stop since proceed_with_getjob() returned False (pilot will end)
[2025-06-03 19:39:46] 2025-06-03 23:39:36,245 | WARNING  | data:copytool_out:received graceful stop - abort after this iteration
[2025-06-03 19:39:46] 2025-06-03 23:39:36,398 | INFO     | found 0 job(s) in 20 queues
[2025-06-03 19:39:46] 2025-06-03 23:39:36,398 | WARNING  | pilot monitor received instruction that args.graceful_stop has been set
[2025-06-03 19:39:46] 2025-06-03 23:39:36,398 | WARNING  | will wait for a maximum of 300 s for threads to finish
[2025-06-03 19:39:46] 2025-06-03 23:39:36,459 | WARNING  | data:queue_monitoring:received graceful stop - abort after this iteration
[2025-06-03 19:39:46] 2025-06-03 23:39:36,605 | INFO     | all job control threads have been joined
[2025-06-03 19:39:46] 2025-06-03 23:39:36,796 | WARNING  | job:job_monitor:received graceful stop - abort after this iteration
[2025-06-03 19:39:46] 2025-06-03 23:39:36,796 | INFO     | aborting loop
[2025-06-03 19:39:46] 2025-06-03 23:39:37,077 | INFO     | all data control threads have been joined
[2025-06-03 19:39:46] 2025-06-03 23:39:37,224 | INFO     | all payload control threads have been joined
[2025-06-03 19:39:46] 2025-06-03 23:39:37,250 | INFO     | [job] retrieve thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:37,607 | INFO     | [payload] execute_payloads thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:37,608 | INFO     | [data] copytool_in thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:37,610 | INFO     | [job] control thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:37,801 | INFO     | [job] job monitor thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,059 | INFO     | [payload] failed_post thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,083 | INFO     | [data] control thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,229 | INFO     | [payload] control thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,250 | INFO     | [data] copytool_out thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,319 | INFO     | [payload] run_realtimelog thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,344 | INFO     | [payload] validate_post thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,425 | INFO     | [job] create_data_payload thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,570 | INFO     | [payload] validate_pre thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:38,570 | INFO     | [job] validate thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:39,174 | WARNING  | job:queue_monitor:received graceful stop - abort after this iteration
[2025-06-03 19:39:46] 2025-06-03 23:39:40,179 | INFO     | [job] queue monitor thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:40,464 | INFO     | [data] queue_monitor thread has finished
[2025-06-03 19:39:46] 2025-06-03 23:39:40,810 | INFO     | only monitor.control thread still running - safe to abort: ['<_MainThread(MainThread, started 140709432977216)>', '<ExcThread(monitor, started 140708799297280)>']
[2025-06-03 19:39:46] 2025-06-03 23:39:41,423 | WARNING  | job_aborted has been set - aborting pilot monitoring
[2025-06-03 19:39:46] 2025-06-03 23:39:41,423 | INFO     | [monitor] control thread has ended
[2025-06-03 19:39:46] 2025-06-03 23:39:45,829 | INFO     | all workflow threads have been joined
[2025-06-03 19:39:46] 2025-06-03 23:39:45,830 | INFO     | end of generic workflow (traces error code: 0)
[2025-06-03 19:39:46] 2025-06-03 23:39:45,830 | INFO     | traces error code: 0
[2025-06-03 19:39:46] 2025-06-03 23:39:45,830 | INFO     | pilot has finished (exit code=0, shell exit code=0)
[2025-06-03 19:39:46] 2025-06-03 23:39:45,934 [wrapper] ==== pilot stdout END ====
[2025-06-03 19:39:46] 2025-06-03 23:39:45,936 [wrapper] ==== wrapper stdout RESUME ====
[2025-06-03 19:39:46] 2025-06-03 23:39:45,938 [wrapper] pilotpid: 2686645
[2025-06-03 19:39:46] 2025-06-03 23:39:45,939 [wrapper] Pilot exit status: 0
[2025-06-03 19:39:46] 2025-06-03 23:39:45,947 [wrapper] pandaids: 6665990251 6665990251
[2025-06-03 19:39:46] 2025-06-03 23:39:46,013 [wrapper] cleanup supervisor_pilot 2999671 2686646
[2025-06-03 19:39:46] 2025-06-03 23:39:46,016 [wrapper] Test setup, not cleaning
[2025-06-03 19:39:46] 2025-06-03 23:39:46,018 [wrapper] apfmon messages muted
[2025-06-03 19:39:46] 2025-06-03 23:39:46,020 [wrapper] ==== wrapper stdout END ====
[2025-06-03 19:39:46] 2025-06-03 23:39:46,022 [wrapper] ==== wrapper stderr END ====
[2025-06-03 19:39:46]  *** Error codes and diagnostics ***
[2025-06-03 19:39:46]     "exeErrorCode": 0,
[2025-06-03 19:39:46]     "exeErrorDiag": "",
[2025-06-03 19:39:46]     "pilotErrorCode": 0,
[2025-06-03 19:39:46]     "pilotErrorDiag": "",
[2025-06-03 19:39:46]  *** Listing of results directory ***
[2025-06-03 19:39:46] total 641960
[2025-06-03 19:39:46] drwx------. 4 boincer umatlas      4096 Apr  3 04:00 pilot3
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas    495897 May 26 09:12 pilot3.tar.gz
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas      5111 May 26 09:38 queuedata.json
[2025-06-03 19:39:46] -rwx------. 1 boincer umatlas     37140 May 26 09:41 runpilot2-wrapper.sh
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas       100 Jun  3 17:15 wrapper_26015_x86_64-pc-linux-gnu
[2025-06-03 19:39:46] -rwxr-xr-x. 1 boincer umatlas      7986 Jun  3 17:15 run_atlas
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas       105 Jun  3 17:15 job.xml
[2025-06-03 19:39:46] -rw-r--r--. 3 boincer umatlas 226307159 Jun  3 17:15 EVNT.44871834._002058.pool.root.1
[2025-06-03 19:39:46] -rw-r--r--. 3 boincer umatlas 226307159 Jun  3 17:15 ATLAS.root_0
[2025-06-03 19:39:46] -rw-r--r--. 2 boincer umatlas     15093 Jun  3 17:15 start_atlas.sh
[2025-06-03 19:39:46] drwxrwx--x. 2 boincer umatlas      4096 Jun  3 17:15 shared
[2025-06-03 19:39:46] -rw-r--r--. 2 boincer umatlas    508883 Jun  3 17:15 input.tar.gz
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas         0 Jun  3 17:15 boinc_lockfile
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas   1603667 Jun  3 17:15 agis_ddmendpoints.agis.ALL.json
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas      2559 Jun  3 18:27 pandaJob.out
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas    986861 Jun  3 18:28 agis_schedconf.cvmfs.json
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas      6934 Jun  3 19:36 init_data.xml
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas        96 Jun  3 19:38 pilot_heartbeat.json
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas 199497601 Jun  3 19:39 HITS.44871837._051705.pool.root.1
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas       529 Jun  3 19:39 boinc_task_state.xml
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas      1023 Jun  3 19:39 memory_monitor_summary.json
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas    290551 Jun  3 19:39 log.44871837._051705.job.log.tgz.1
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas      7680 Jun  3 19:39 heartbeat.json
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas      8192 Jun  3 19:39 boinc_mmap_file
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas        27 Jun  3 19:39 wrapper_checkpoint.txt
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas      4353 Jun  3 19:39 pilotlog.txt
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas    422744 Jun  3 19:39 log.44871837._051705.job.log.1
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas       357 Jun  3 19:39 output.list
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas       620 Jun  3 19:39 runtime_log
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas    727040 Jun  3 19:39 result.tar.gz
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas      8739 Jun  3 19:39 runtime_log.err
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas       774 Jun  3 19:39 JZPKDmuFnd7nsSi4ap6QjLDmwznN0nGgGQJmUXzaDm2uWKDmIyc9tn.diag
[2025-06-03 19:39:46] -rw-r--r--. 1 boincer umatlas     23192 Jun  3 19:39 stderr.txt
[2025-06-03 19:39:46] HITS file was successfully produced:
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas 199497601 Jun  3 19:39 shared/HITS.pool.root.1
[2025-06-03 19:39:46]  *** Contents of shared directory: ***
[2025-06-03 19:39:46] total 417060
[2025-06-03 19:39:46] -rw-r--r--. 3 boincer umatlas 226307159 Jun  3 17:15 ATLAS.root_0
[2025-06-03 19:39:46] -rw-r--r--. 2 boincer umatlas     15093 Jun  3 17:15 start_atlas.sh
[2025-06-03 19:39:46] -rw-r--r--. 2 boincer umatlas    508883 Jun  3 17:15 input.tar.gz
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas 199497601 Jun  3 19:39 HITS.pool.root.1
[2025-06-03 19:39:46] -rw-------. 1 boincer umatlas    727040 Jun  3 19:39 result.tar.gz
19:39:47 (2675712): run_atlas exited; CPU time 29297.616631
19:39:47 (2675712): called boinc_finish(0)

</stderr_txt>
]]>