1) Message boards : ATLAS application : ATLAS tasks not using CPU but still in process (Message 41213)
Posted 9 Jan 2020 by csbyseti
Post:


What are you doing with a V5 VBOX?!! It's up to 6.1 now.
I would suggest updating and trying to get Atlas tasks again.
You are seriously out of date.

And go through Yeti's checklist to double check everything.


Got many problems with 6.0.14, so I reinstalled 5.2.xx.
2) Message boards : ATLAS application : No Atlas tasks in three weeks (Message 41212)
Posted 9 Jan 2020 by csbyseti
Post:
That's the normal behaviour of ATLAS scheduling. Even if the project status page shows enough WUs, the server won't send them.
And if other sub-projects are selected, BOINC gets WUs for those projects and the queue fills up with them.

The LHC sub-projects are so different that the best approach is to run one BOINC instance for every sub-project you want to crunch.

You can count yourself lucky if a pure ATLAS instance doesn't run dry.

Below is part of a log file from a pure ATLAS instance: more than 1 hour with result uploads and no new work.

Ryzen3900x-Nr-2

8582 LHC@home 09.01.2020 08:29:52 No tasks sent
8583 LHC@home 09.01.2020 08:29:52 No tasks are available for ATLAS Simulation
8584 LHC@home 09.01.2020 08:29:52 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8585 LHC@home 09.01.2020 08:43:15 Computation for task QkPKDmwNc9vn9Rq4apoT9bVoABFKDmABFKDm70BMDmABFKDmdoWzon_0 finished
8586 LHC@home 09.01.2020 08:43:15 Starting task TdAODmpLd9vnsSi4apGgGQJmABFKDmABFKDmY9ZXDmABFKDmT4l7Uo_0
8587 LHC@home 09.01.2020 08:43:17 Started upload of QkPKDmwNc9vn9Rq4apoT9bVoABFKDmABFKDm70BMDmABFKDmdoWzon_0_r1316286078_ATLAS_result
8588 LHC@home 09.01.2020 08:43:17 Started upload of QkPKDmwNc9vn9Rq4apoT9bVoABFKDmABFKDm70BMDmABFKDmdoWzon_0_r1316286078_ATLAS_hits
8589 LHC@home 09.01.2020 08:43:24 Finished upload of QkPKDmwNc9vn9Rq4apoT9bVoABFKDmABFKDm70BMDmABFKDmdoWzon_0_r1316286078_ATLAS_result
8590 LHC@home 09.01.2020 08:46:30 Finished upload of QkPKDmwNc9vn9Rq4apoT9bVoABFKDmABFKDm70BMDmABFKDmdoWzon_0_r1316286078_ATLAS_hits
8591 LHC@home 09.01.2020 08:46:33 Sending scheduler request: To fetch work.
8592 LHC@home 09.01.2020 08:46:33 Reporting 1 completed tasks
8593 LHC@home 09.01.2020 08:46:33 Requesting new tasks for CPU
8594 LHC@home 09.01.2020 08:46:35 Scheduler request completed: got 0 new tasks
8595 LHC@home 09.01.2020 08:46:35 No tasks sent
8596 LHC@home 09.01.2020 08:46:35 No tasks are available for ATLAS Simulation
8597 LHC@home 09.01.2020 08:46:35 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8598 LHC@home 09.01.2020 08:57:45 Sending scheduler request: To fetch work.
8599 LHC@home 09.01.2020 08:57:45 Requesting new tasks for CPU
8600 LHC@home 09.01.2020 08:57:47 Scheduler request completed: got 0 new tasks
8601 LHC@home 09.01.2020 08:57:47 No tasks sent
8602 LHC@home 09.01.2020 08:57:47 No tasks are available for ATLAS Simulation
8603 LHC@home 09.01.2020 08:57:47 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8604 LHC@home 09.01.2020 09:11:04 Computation for task TieKDmQHc9vnsSi4apGgGQJmABFKDmABFKDmhJoWDmABFKDmbjQnJo_0 finished
8605 LHC@home 09.01.2020 09:11:04 Starting task 6okLDmJhd9vnsSi4apGgGQJmABFKDmABFKDmUJkXDmABFKDmJCjdfn_0
8606 LHC@home 09.01.2020 09:11:06 Started upload of TieKDmQHc9vnsSi4apGgGQJmABFKDmABFKDmhJoWDmABFKDmbjQnJo_0_r1554343016_ATLAS_result
8607 LHC@home 09.01.2020 09:11:06 Started upload of TieKDmQHc9vnsSi4apGgGQJmABFKDmABFKDmhJoWDmABFKDmbjQnJo_0_r1554343016_ATLAS_hits
8608 LHC@home 09.01.2020 09:11:12 Finished upload of TieKDmQHc9vnsSi4apGgGQJmABFKDmABFKDmhJoWDmABFKDmbjQnJo_0_r1554343016_ATLAS_result
8609 LHC@home 09.01.2020 09:14:11 Finished upload of TieKDmQHc9vnsSi4apGgGQJmABFKDmABFKDmhJoWDmABFKDmbjQnJo_0_r1554343016_ATLAS_hits
8610 LHC@home 09.01.2020 09:14:13 Sending scheduler request: To fetch work.
8611 LHC@home 09.01.2020 09:14:13 Reporting 1 completed tasks
8612 LHC@home 09.01.2020 09:14:13 Requesting new tasks for CPU
8613 LHC@home 09.01.2020 09:14:15 Scheduler request completed: got 0 new tasks
8614 LHC@home 09.01.2020 09:14:15 No tasks sent
8615 LHC@home 09.01.2020 09:14:15 No tasks are available for ATLAS Simulation
8616 LHC@home 09.01.2020 09:14:15 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8617 LHC@home 09.01.2020 09:20:25 Sending scheduler request: To fetch work.
8618 LHC@home 09.01.2020 09:20:25 Requesting new tasks for CPU
8619 LHC@home 09.01.2020 09:20:27 Scheduler request completed: got 0 new tasks
8620 LHC@home 09.01.2020 09:20:27 No tasks sent
8621 LHC@home 09.01.2020 09:20:27 No tasks are available for ATLAS Simulation
8622 LHC@home 09.01.2020 09:20:27 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8623 LHC@home 09.01.2020 09:21:40 Computation for task hjMMDmWtc9vnsSi4apGgGQJmABFKDmABFKDmb7JXDmABFKDmbie2Wn_0 finished
8624 LHC@home 09.01.2020 09:21:41 Starting task ggXNDmt1d9vnsSi4apGgGQJmABFKDmABFKDmYGxXDmABFKDmMdz94n_0
8625 LHC@home 09.01.2020 09:21:43 Started upload of hjMMDmWtc9vnsSi4apGgGQJmABFKDmABFKDmb7JXDmABFKDmbie2Wn_0_r1473116892_ATLAS_result
8626 LHC@home 09.01.2020 09:21:43 Started upload of hjMMDmWtc9vnsSi4apGgGQJmABFKDmABFKDmb7JXDmABFKDmbie2Wn_0_r1473116892_ATLAS_hits
8627 LHC@home 09.01.2020 09:21:50 Finished upload of hjMMDmWtc9vnsSi4apGgGQJmABFKDmABFKDmb7JXDmABFKDmbie2Wn_0_r1473116892_ATLAS_result
8628 LHC@home 09.01.2020 09:24:54 Finished upload of hjMMDmWtc9vnsSi4apGgGQJmABFKDmABFKDmb7JXDmABFKDmbie2Wn_0_r1473116892_ATLAS_hits
8629 LHC@home 09.01.2020 09:24:58 Sending scheduler request: To fetch work.
8630 LHC@home 09.01.2020 09:24:58 Reporting 1 completed tasks
8631 LHC@home 09.01.2020 09:24:58 Requesting new tasks for CPU
8632 LHC@home 09.01.2020 09:25:00 Scheduler request completed: got 0 new tasks
8633 LHC@home 09.01.2020 09:25:00 No tasks sent
8634 LHC@home 09.01.2020 09:25:00 No tasks are available for ATLAS Simulation
8635 LHC@home 09.01.2020 09:25:00 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8636 LHC@home 09.01.2020 09:27:53 Computation for task eR9MDmLfd9vnsSi4apGgGQJmABFKDmABFKDmwgjXDmABFKDm5TaS5n_0 finished
8637 LHC@home 09.01.2020 09:27:53 Starting task JXtKDm0We9vn9Rq4apoT9bVoABFKDmABFKDmkmbNDmABFKDmmpV8en_0
8638 LHC@home 09.01.2020 09:27:55 Started upload of eR9MDmLfd9vnsSi4apGgGQJmABFKDmABFKDmwgjXDmABFKDm5TaS5n_0_r1815697472_ATLAS_result
8639 LHC@home 09.01.2020 09:27:55 Started upload of eR9MDmLfd9vnsSi4apGgGQJmABFKDmABFKDmwgjXDmABFKDm5TaS5n_0_r1815697472_ATLAS_hits
8640 LHC@home 09.01.2020 09:28:02 Finished upload of eR9MDmLfd9vnsSi4apGgGQJmABFKDmABFKDmwgjXDmABFKDm5TaS5n_0_r1815697472_ATLAS_result
8641 LHC@home 09.01.2020 09:31:07 Finished upload of eR9MDmLfd9vnsSi4apGgGQJmABFKDmABFKDmwgjXDmABFKDm5TaS5n_0_r1815697472_ATLAS_hits
8642 LHC@home 09.01.2020 09:31:10 Sending scheduler request: To fetch work.
8643 LHC@home 09.01.2020 09:31:10 Reporting 1 completed tasks
8644 LHC@home 09.01.2020 09:31:10 Requesting new tasks for CPU
8645 LHC@home 09.01.2020 09:31:11 Scheduler request completed: got 0 new tasks
8646 LHC@home 09.01.2020 09:31:11 No tasks sent
8647 LHC@home 09.01.2020 09:31:11 No tasks are available for ATLAS Simulation
8648 LHC@home 09.01.2020 09:31:11 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8649 LHC@home 09.01.2020 09:32:46 Computation for task zbkMDm7ed9vnsSi4apGgGQJmABFKDmABFKDmFCjXDmABFKDmsv0Kin_0 finished
8650 LHC@home 09.01.2020 09:32:46 Starting task pPFKDmUoe9vnsSi4apGgGQJmABFKDmABFKDmgHQYDmABFKDm1TOexm_0
8651 LHC@home 09.01.2020 09:32:48 Started upload of zbkMDm7ed9vnsSi4apGgGQJmABFKDmABFKDmFCjXDmABFKDmsv0Kin_0_r1469070031_ATLAS_result
8652 LHC@home 09.01.2020 09:32:48 Started upload of zbkMDm7ed9vnsSi4apGgGQJmABFKDmABFKDmFCjXDmABFKDmsv0Kin_0_r1469070031_ATLAS_hits
8653 LHC@home 09.01.2020 09:32:55 Finished upload of zbkMDm7ed9vnsSi4apGgGQJmABFKDmABFKDmFCjXDmABFKDmsv0Kin_0_r1469070031_ATLAS_result
8654 LHC@home 09.01.2020 09:35:55 Finished upload of zbkMDm7ed9vnsSi4apGgGQJmABFKDmABFKDmFCjXDmABFKDmsv0Kin_0_r1469070031_ATLAS_hits
8655 LHC@home 09.01.2020 09:35:57 Sending scheduler request: To fetch work.
8656 LHC@home 09.01.2020 09:35:57 Reporting 1 completed tasks
8657 LHC@home 09.01.2020 09:35:57 Requesting new tasks for CPU
8658 LHC@home 09.01.2020 09:35:59 Scheduler request completed: got 0 new tasks
8659 LHC@home 09.01.2020 09:35:59 No tasks sent
8660 LHC@home 09.01.2020 09:35:59 No tasks are available for ATLAS Simulation
8661 LHC@home 09.01.2020 09:35:59 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
8662 LHC@home 09.01.2020 09:48:09 Sending scheduler request: To fetch work.
8663 LHC@home 09.01.2020 09:48:09 Requesting new tasks for CPU
8664 LHC@home 09.01.2020 09:48:11 Scheduler request completed: got 0 new tasks
8665 LHC@home 09.01.2020 09:48:11 No tasks sent
8666 LHC@home 09.01.2020 09:48:11 No tasks are available for ATLAS Simulation
8667 LHC@home 09.01.2020 09:48:11 Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
3) Message boards : ATLAS application : Atlas Version 2 WU's crash (at startup?) (Message 40242)
Posted 22 Oct 2019 by csbyseti
Post:
This sounds good.

Thanks for the fast reaction on Friday.

On my i7-5820K, WUs were also failing the whole weekend. I have updated VirtualBox now and hope it solves the problems on this machine.
4) Message boards : ATLAS application : Atlas Version 2 WU's crash (at startup?) (Message 40188)
Posted 18 Oct 2019 by csbyseti
Post:
Got the following error on one VM:

https://c.web.de/@309282286350637121/m6p10ZAuQlOkGGaU2RipmQ

I've also seen this error on a machine at a WU restart.
5) Message boards : ATLAS application : Atlas Version 2 WU's crash (at startup?) (Message 40187)
Posted 18 Oct 2019 by csbyseti
Post:
This morning I see ATLAS WUs on all computers with only 110%-150% CPU load but ~10 h run time. The CPU load should be at 350-390% with 4 threads.

I think all WUs from yesterday have crashed, but without a working Alt-F2 it's not easy to get anything displayed.

There must be a problem in the current set of WUs; older version 2 WUs worked normally.

Got some VM text with a kernel panic displayed.

Some team members have the same problem.
6) Message boards : ATLAS application : ATLAS vbox version 2.00 (Message 40186)
Posted 17 Oct 2019 by csbyseti
Post:
The currently active monitoring script doesn't output anything at ALT-F2 until at least 1 event has finished.
As the current ATLAS batch sometimes needs up to 30-40m per event it may look like ALT-F2 has crashed.

Be patient.
I already sent David a suggestion for an improved monitoring but there are a few (hopefully minor) issues to solve before it can go live.


I don't think that will help if output only appears after 1 event is finished, or if 1 event uses the whole WU run time.

Normally I use this function only when the WU runtime rises in an abnormal way.

At the startup of a WU I only watch the CPU utilisation % in BoincTasks for the first hour.

Perhaps it's a VirtualBox version problem; I think I have version 5.2.xx on all machines, the version that comes with BOINC.
7) Message boards : ATLAS application : ATLAS vbox version 2.00 (Message 40183)
Posted 17 Oct 2019 by csbyseti
Post:
Version 2 works fine with VirtualBox on my machines, but it would be nice if the VM console showed something at Alt-F2.

That's the best way to check that a WU is working.
8) Message boards : ATLAS application : Another batch of faulty WUs? (Message 39944)
Posted 17 Sep 2019 by csbyseti
Post:
Currently I'm getting more than 80% faulty ATLAS WUs; they stop working after a few minutes. With such big download sizes that's not much fun.

Is it a problem with the WU data set, or a problem with the LHC infrastructure?

https://lhcathome.cern.ch/lhcathome/results.php?hostid=10488512&offset=0&show_names=0&state=0&appid=14

or

https://lhcathome.cern.ch/lhcathome/results.php?hostid=10493650&offset=0&show_names=0&state=0&appid=14
9) Message boards : Number crunching : Boinc memory estimate and LHC Settings (Message 35024)
Posted 16 Apr 2018 by csbyseti
Post:
I'll try to show this behaviour more clearly:
Goal: 5 WUs with 3 CPUs active.
Part of app_config.xml:
<app_name>ATLAS</app_name>
<version_num>100</version_num>
<platform>windows_x86_64</platform>
<avg_ncpus>3.000000</avg_ncpus>
<max_ncpus>3.000000</max_ncpus>
<plan_class>vbox64_mt_mcore_atlas</plan_class>
<api_version>7.7.0</api_version>
<cmdline>--memory_size_mb 5300</cmdline>
<dont_throttle/>
<is_wrapper/>
<needs_network/>
BOINC takes these values for the VMs.

Starting preference settings:
maximum number of work: 7
maximum number of CPUs: 4
--> only 4 active WUs, no spare WU in the pipeline
Switching "maximum number of CPUs" to 5:
one more WU is downloaded and started!

16.04.2018 19:01:31 | LHC@home | [mem_usage] 9EmLDmoJHSsnlyackoJh5iwnABFKDmABFKDmLbmWDmABFKDmrxkvgm_0: WS 5.28MB, smoothed 6200.00MB, swap 135.92MB, 0.00 page faults/sec, user CPU 30.422, kernel CPU 27963.031
16.04.2018 19:01:31 | LHC@home | [mem_usage] bs0MDmwBHSsnyYickojUe11pABFKDmABFKDmz15UDmABFKDmwYmSkm_0: WS 5.45MB, smoothed 6200.00MB, swap 135.14MB, 0.00 page faults/sec, user CPU 25.297, kernel CPU 19489.547
16.04.2018 19:01:31 | LHC@home | [mem_usage] 3TLODmenHSsnlyackoJh5iwnABFKDmABFKDm4F1WDmABFKDmhJR8bn_0: WS 5.64MB, smoothed 6200.00MB, swap 136.42MB, 0.00 page faults/sec, user CPU 23.578, kernel CPU 15982.234
16.04.2018 19:01:31 | LHC@home | [mem_usage] 3hkNDmq0ISsnyYickojUe11pABFKDmABFKDmZkwVDmABFKDm0ln2Jo_0: WS 5.64MB, smoothed 6200.00MB, swap 136.80MB, 0.00 page faults/sec, user CPU 21.266, kernel CPU 11155.875
16.04.2018 19:01:31 | LHC@home | [mem_usage] 8pmMDmJVDSsnlyackoJh5iwnABFKDmABFKDmAW5UDmABFKDmv1aqLn_1: WS 9.58MB, smoothed 7100.00MB, swap 124.29MB, 0.00 page faults/sec, user CPU 1.734, kernel CPU 3.703
16.04.2018 19:01:31 | | [mem_usage] BOINC totals: WS 44.24MB, smoothed 31930.41MB, swap 1744.37MB, 0.00 page faults/sec
16.04.2018 19:01:31 | | [mem_usage] All others: WS 648.27MB, swap 7009.15MB, user 28834.250s, kernel 30899.547s
16.04.2018 19:01:31 | | [mem_usage] non-BOINC CPU usage: 0.53%

BOINC uses 6200 MB for the older WUs (max CPUs setting of 4) and 7100 MB for the last WU (max CPUs setting of 5); don't forget, the VM itself uses 5300 MB.

And now one of the "4 CPUs" WUs completes, its upload starts, and after the upload the next download starts.

16.04.2018 19:24:40 | LHC@home | Finished download of RXBMDmCFKSsnlyackoJh5iwnABFKDmABFKDmMn2XDmABFKDmHwh6Ao_EVNT.13620778._000602.pool.root.1
16.04.2018 19:24:43 | | [mem_usage] enforce: available RAM 32714.59MB swap 40714.59MB
16.04.2018 19:24:43 | LHC@home | [cpu_sched_debug] enforce: task RXBMDmCFKSsnlyackoJh5iwnABFKDmABFKDmMn2XDmABFKDmHwh6Ao_0 can't run, too big 7100.00MB > 6850.85MB
16.04.2018 19:24:46 | LHC@home | [mem_usage] bs0MDmwBHSsnyYickojUe11pABFKDmABFKDmz15UDmABFKDmwYmSkm_0: WS 11.97MB, smoothed 6200.00MB, swap 135.25MB, 0.00 page faults/sec, user CPU 27.922, kernel CPU 23663.031
16.04.2018 19:24:46 | LHC@home | [mem_usage] 3TLODmenHSsnlyackoJh5iwnABFKDmABFKDm4F1WDmABFKDmhJR8bn_0: WS 11.13MB, smoothed 6200.00MB, swap 136.42MB, 0.00 page faults/sec, user CPU 26.766, kernel CPU 20132.906
16.04.2018 19:24:46 | LHC@home | [mem_usage] 3hkNDmq0ISsnyYickojUe11pABFKDmABFKDmZkwVDmABFKDm0ln2Jo_0: WS 10.73MB, smoothed 6200.00MB, swap 136.18MB, 0.00 page faults/sec, user CPU 24.219, kernel CPU 15326.422
16.04.2018 19:24:46 | LHC@home | [mem_usage] 8pmMDmJVDSsnlyackoJh5iwnABFKDmABFKDmAW5UDmABFKDmv1aqLn_1: WS 16.69MB, smoothed 7100.00MB, swap 136.03MB, 0.00 page faults/sec, user CPU 17.516, kernel CPU 2899.625
16.04.2018 19:24:46 | | [mem_usage] BOINC totals: WS 230.51MB, smoothed 25871.87MB, swap 1636.14MB, 0.00 page faults/sec
16.04.2018 19:24:46 | | [mem_usage] All others: WS 1776.46MB, swap 7308.82MB, user 28964.828s, kernel 31025.063s
16.04.2018 19:24:46 | | [mem_usage] non-BOINC CPU usage: 1.27%

The new WU also shows the 7100 MB value, and this exceeds the 32000 MB of memory.

This shows: the number of downloaded WUs depends only on the value of "maximum number of CPUs" and not on "maximum number of work".
But "maximum number of CPUs" also increases the amount of memory BOINC assumes in its memory calculation.

With this preference behaviour it's not possible to run more than 4 ATLAS WUs at the same time.

Sorry for the long text; I hope this makes the problem clearer.
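
As a reference for the setup described above, a client-side app_config.xml along these lines could express the goal of 5 concurrent 3-CPU ATLAS tasks (a sketch only, using the standard BOINC app_config.xml elements; the values are taken from the post, and note this caps concurrency locally but does not change how many tasks the server sends, which is the actual complaint here):

```xml
<!-- Hypothetical app_config.xml for the LHC@home project folder in the
     BOINC data directory. max_concurrent caps running ATLAS tasks at 5;
     avg_ncpus and the cmdline mirror the values from the post. -->
<app_config>
  <app>
    <name>ATLAS</name>
    <max_concurrent>5</max_concurrent>
  </app>
  <app_version>
    <app_name>ATLAS</app_name>
    <plan_class>vbox64_mt_mcore_atlas</plan_class>
    <avg_ncpus>3.0</avg_ncpus>
    <cmdline>--memory_size_mb 5300</cmdline>
  </app_version>
</app_config>
```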
10) Message boards : Number crunching : Boinc memory estimate and LHC Settings (Message 34977)
Posted 12 Apr 2018 by csbyseti
Post:
I used ATLAS for some months until the upload problems occurred.
It was no problem to calculate 5 ATLAS WUs with 3 threads each on a computer with 32 GB RAM.
After I restarted LHC it was not possible (with the same app_config.xml) to calculate more than 4 WUs at the same time.
Reason: BOINC used a 9100 MB value for the memory estimation -> not enough memory for the next active task.
When I reduce "maximum number of CPUs" to 3 in the LHC settings, BOINC uses the correct 5300 MB value for the memory estimation of every WU.

But then I get only 3 WUs instead of the 8 WUs set in "maximum number of work" in the LHC settings.
So the memory estimation in BOINC is fixed, but BOINC doesn't get enough WUs to calculate 5 WUs at the same time.

Running a CPU at 50-80% load is wasting CPU power.
Running two different projects is no option because of the ugly BOINC scheduler.

Please fix this behaviour.
11) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33423)
Posted 17 Dec 2017 by csbyseti
Post:
Thanks, David, for your work at the weekend.

Some uploads finished during the night; some are still stuck at 100% uploaded but not finished.
In my opinion the bottleneck is the step between the upload reaching 100% and the 'upload OK, task closed' being sent to the BOINC client.
I don't know how BOINC works server-side (perhaps it copies the temporary upload file), but it looks like the BOINC client runs into a timeout and marks the upload as faulty, which results in a complete new upload.
If this happens on many clients you get a huge amount of upload load.

So if there is a timeout value in the BOINC client, doubling it would help projects with big upload file sizes.
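
The failure mode described above can be sketched as a toy model (this is not BOINC's actual transfer code; the doubling backoff and the numbers are assumptions purely for illustration):

```python
# Toy model of the described failure mode: the file transfers completely,
# then the server needs `finalize_s` seconds before it acknowledges the
# upload. If the client's timeout is shorter, it declares the upload faulty
# and re-sends the whole file from 0%.

def uploads_until_ack(timeout_s: float, finalize_s: float, max_tries: int = 10) -> int:
    """Number of complete uploads performed before the ack arrives in time.
    Assumes (hypothetically) that the client doubles its timeout per retry."""
    for attempt in range(1, max_tries + 1):
        if timeout_s >= finalize_s:
            return attempt          # ack received before the client gives up
        timeout_s *= 2              # assumed retry policy, not BOINC's real one
    return max_tries

# A 300 s timeout against a 500 s server-side finalize wastes one full upload:
print(uploads_until_ack(300, 500))   # -> 2
# A doubled initial timeout avoids the re-upload entirely:
print(uploads_until_ack(600, 500))   # -> 1
```

With big result files, every wasted re-upload multiplies the load on the file server, which is why a longer timeout helps in this scenario.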
12) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33390)
Posted 15 Dec 2017 by csbyseti
Post:
It's not a good idea to place new ATLAS tasks on the server.
Even the new ones placed today run into the upload failure.
So the size of the problem will only increase.
13) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33341)
Posted 14 Dec 2017 by csbyseti
Post:
The new BOINC instance with new tasks can upload (9 tasks uploaded so far), but the older ones still have the problem.
So it seems to be a database problem and not a performance problem of the server.
Instead of putting the result in the database, the uploaded file goes to /dev/null.
14) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33329)
Posted 14 Dec 2017 by csbyseti
Post:
I think 24 tasks per BOINC instance is the maximum.
'Unlimited' results in a low number of tasks (I forget the exact value).
The current tasks need about 7 hours to finish, so the throughput will be ~17 tasks per 24 hours (5 active instances with 3 cores each).

Started a new BOINC instance this morning and got new tasks (1 download error).
Let's see if the uploads of the new tasks finish (first results in about 7 hours).
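
The throughput estimate above follows from simple arithmetic (numbers taken from the post):

```python
# Throughput check: 5 tasks running concurrently, ~7 hours per task.
concurrent_tasks = 5
hours_per_task = 7

tasks_per_day = concurrent_tasks * 24 / hours_per_task
print(round(tasks_per_day))  # -> 17, matching the "~17 tasks per 24 hours"
```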
15) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33320)
Posted 13 Dec 2017 by csbyseti
Post:
The uploads don't work correctly; they fail to finish.
It looks like the server doesn't accept the old WUs that were downloaded before the server crash.
In a few hours all 24 WUs on both Ryzen machines will be ready for upload; it won't be nice if I have to delete them to get new WUs.
And most of them are long-running WUs: 4.2 GB of upload size on each machine.
16) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33311)
Posted 13 Dec 2017 by csbyseti
Post:
.....There is an increase of load with the current ATLAS simulation campaign, so we will add another file server to increase capacity.


Thanks for the information.

I hope the new file server will be working soon; at the moment all uploads get stuck at 100% and don't finish.
Interesting that the upload itself works but not the finishing. It seems the database entry is blocked.

My WU cache will be empty in 10 hours and it's snowing......
17) Message boards : ATLAS application : Tasks of batch 12577096 have 200 Events (Message 33297)
Posted 13 Dec 2017 by csbyseti
Post:
It would be fine for me if the ATLAS tasks were as big as possible.
The reason is that ATLAS (or LHC) is an SSD killer.
On this machine, a Ryzen 1700 with 32 GB RAM and an 850 Evo 250 GB used only for BOINC, I have 0.7 TBW on the system disk and 20.5 TBW on the BOINC disk.
It was built in the first week of August 2017 and has not been running LHC work units all the time.
So the warranty value of 80 TBW will be reached in 24 months or earlier.
The small work units run only 1 hour (3 cores) and often produce a big 200 MB download.
With 5 tasks running at the same time, more than 1 GB of downloads is written in 1 hour.

Bigger WUs would greatly reduce the amount of downloaded data written.
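
The download-write claim above checks out arithmetically (numbers from the post; this counts only task downloads and ignores the VM's own disk writes, which add to the total):

```python
# Writes caused by task downloads alone, per the figures in the post:
tasks_running = 5          # concurrent 3-core tasks
mb_per_download = 200      # "big 200 MB download" per ~1 h task
hours_per_task = 1

gb_per_hour = tasks_running * mb_per_download / hours_per_task / 1000
gb_per_day = gb_per_hour * 24
print(gb_per_hour, gb_per_day)  # about 1 GB/h, ~24 GB/day from downloads alone
```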
18) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33292)
Posted 13 Dec 2017 by csbyseti
Post:
Tested with PrimeGrid: no upload problem there.

With my telephone DSL line (600 kbit/s upload speed) the upload won't start.
With the cable DSL line (10 Mbit/s upload speed) the upload starts and goes to 100% but will not finish.
When BOINC restarts the upload, it is reset and starts again at 0%.
The upload speed seems to be OK: limited to 200 KB/s for each BOINC session, so 2 upload tasks per session each run at 100 KB/s.
19) Message boards : ATLAS application : Uploads of finished tasks not possible since last night (Message 33290)
Posted 13 Dec 2017 by csbyseti
Post:
I have the same problem: the upload goes to 100% but will not finish.
This morning I first thought it was a problem with last night's Windows 10 update.
20) Message boards : ATLAS application : Only 6 concurrent tasks per computer? (Message 32402)
Posted 12 Sep 2017 by csbyseti
Post:
the 24hrs is the time to return the WU, so if you can run 1 at a time then you could have a buffer of 16-24 WU's and still be fine.

If you had n CPU's then you could do n times this.

Since Atlas doesn't really use BOINC's task management the most WU's you can have at any time is 24 anyway.

In theory the 0.5 work buffer would store an additional 8-12 WU's, so you could exceed the 24hr limit.

The max of 24 is only irritating to people with more than 24 cores, as they have more cores than the allowed WU limit; a better server setting would be to only allow 24 WU/core/day


Sorry Toby Broom, but I don't understand what you mean, especially the first sentence.
For BOINC users the time to return is the deadline; if you want a shorter return time, you have to shorten the deadline.
If the CPU power is not able to crunch the latest WU in the buffer, then the user has to reduce the queue depth.




©2022 CERN