1) Message boards : Sixtrack Application : please, remove non-optimized application SixTrack for 32 bit systems (Message 42648)
Posted 19 hours ago by Harri Liljeroos
Post:
Have you viewed your computer's performance details here on the website? Click your Host details and select 'Application details'. There is a list of performance figures for all applications run on your Host. The server selects the fastest application (highest Average processing rate) for your Host when it sends you tasks. This isn't a very precise method, as Sixtrack tasks vary a lot in size and runtime. That page shows that there isn't that much difference in application performance (in the server's opinion), and a bunch of long tasks may drag the GFLOP numbers down, so the server thinks that another application would be faster and selects that one.

Even if the numbers have stabilized to favor one application, every once in a while the server sends tasks for the other applications, just to check whether there have been changes to the Host that would favor another application.
2) Message boards : ATLAS application : So this is how my PC perform (Message 42541)
Posted 9 days ago by Harri Liljeroos
Post:
We (in the Boinc environment) are not actually calculating any collision results from the Atlas experiment. We are calculating different kinds of simulations for the Atlas experiment. I think that the collision results are all calculated by the Grid.
3) Message boards : Theory Application : No Tasks (Message 42539)
Posted 9 days ago by Harri Liljeroos
Post:
Re-using an old thread here. No tasks are available for Theory according to the Server status page, and none have been downloaded either since this afternoon.
4) Message boards : ATLAS application : Are there Atlas tasks using more than 8 cores? (Message 42461)
Posted 15 days ago by Harri Liljeroos
Post:
There is a formula to calculate the memory requirement for Atlas tasks. If I remember correctly it is 3000 MB + numcores x 900 MB. So 1 CPU core requires 3900 MB, 2 cores 4800 MB, 3 cores 5700 MB, etc.
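The remembered formula above can be sketched as follows (a minimal sketch; the 3000 MB base and 900 MB per-core figures are as recalled in the post, not confirmed project values):

```python
def atlas_memory_mb(num_cores: int) -> int:
    """Estimated RAM requirement (MB) for an Atlas task.

    Assumes the formula as remembered: 3000 MB base + 900 MB per core.
    """
    BASE_MB = 3000
    PER_CORE_MB = 900
    return BASE_MB + num_cores * PER_CORE_MB

# 1 core -> 3900 MB, 2 cores -> 4800 MB, 3 cores -> 5700 MB
for cores in (1, 2, 3):
    print(f"{cores} core(s): {atlas_memory_mb(cores)} MB")
```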
5) Message boards : Theory Application : New Version 300.06 (Message 42435)
Posted 16 days ago by Harri Liljeroos
Post:
What also doesn't seem right is that the progress is reported as 'runtime / max runtime', which means that many tasks finish at about 1% progress.
Have a read here.

Thank you for this tip. I had already forgotten about that. I set it to 24 hours; let's see how many get aborted because of that.
6) Message boards : Theory Application : New Version 300.06 (Message 42430)
Posted 16 days ago by Harri Liljeroos
Post:
The native Theory 300.06 are running fine for me (Ubuntu 18.04.4). One is at the two hour mark now, and shows 23 minutes remaining.
Several have finished in 13 to 30 minutes.

OK, I am running only VBox tasks, so I was talking about them. I didn't check the native application version number.

What also doesn't seem right is that the progress is reported as 'runtime / max runtime', which means that many tasks finish at about 1% progress.
7) Message boards : Theory Application : New Version 300.06 (Message 42420)
Posted 17 days ago by Harri Liljeroos
Post:
The new tasks for 300.06 show an estimated runtime of 240 hours if they are not finished within about an hour. This setting cannot work with the 10-day deadline (= 240 hours), as tasks will go into high-priority mode. I know that Theory tasks do not usually take that long, but Boinc doesn't know that.

300.05 had that estimate at 100 hours, and Boinc did not get any better at estimating the runtime even though it had run hundreds of tasks. The runtime estimation method has to be improved to reflect the actual runtime needed, or at least the deadline should be made longer (24 days).
8) Message boards : Sixtrack Application : Tasks available / tasks not available (Message 42369)
Posted 26 days ago by Harri Liljeroos
Post:
Santa McIntosh ?


All your computers run Windows. Who is this McIntosh?

Eric McIntosh
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4296#30572
9) Message boards : Theory Application : Some WUs with fractional core usage? (Message 42321)
Posted 28 Apr 2020 by Harri Liljeroos
Post:
This value cannot control how much of a CPU a task will actually use; the application will use as much CPU as it needs. This value controls how many tasks Boinc can run simultaneously before the core limit kicks in. You can override the server's values in an app_config.xml and force Boinc to reserve 1 CPU core for each Theory task if you like. Here's the manual for it: https://boinc.berkeley.edu/wiki/Client_configuration#Project-level_configuration
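For example, an app_config.xml along these lines would reserve a full core per Theory task (a sketch only: the app_name and plan_class must match what your client_state.xml actually shows for the Theory version you run, and the values here are placeholders):

```xml
<app_config>
    <app_version>
        <!-- app_name must match the name in client_state.xml -->
        <app_name>Theory</app_name>
        <!-- plan_class must also match; check client_state.xml -->
        <plan_class>vbox64_theory</plan_class>
        <!-- reserve one full CPU core per task -->
        <avg_ncpus>1</avg_ncpus>
    </app_version>
</app_config>
```

Put the file in the project directory (projects\lhcathome.cern.ch_lhcathome) and re-read config files from the Boinc Manager.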
10) Message boards : Theory Application : Some WUs with fractional core usage? (Message 42308)
Posted 27 Apr 2020 by Harri Liljeroos
Post:
P.S. it moaned that my history was too large when I viewed the history of all projects on all computers. I guess that's due to the Milkyway Separation WUs that take only 25 seconds per graphics card - one host downloads 900 WUs at a time. I wonder why they don't make them larger. It used to be even worse - they're now in bundles of 4 or 5; they used to take only 6 seconds.

I have seen those complaints too when there were a lot of short tasks from Sixtrack and Seti. So I just reduced the number of days to store in the history. I have two hosts (Windows) and both have their own BoincTasks running.
11) Message boards : Theory Application : Some WUs with fractional core usage? (Message 42304)
Posted 27 Apr 2020 by Harri Liljeroos
Post:
Where do you see this?


I use Boinctasks, which shows it in the "Use" column. Anything on one CPU core has nothing in that column. Things like Atlas show 6C to show it's using 6 cores. Things running on GPUs like Einstein show "0.5C + 1ATI", meaning half a CPU core and 1 ATI graphics card.

In the Boinc Manager, I guess (as I don't normally look at it) it would be in the "Status" column, which similarly shows "Running" for 1 CPU core, or anything else is described as "Running (0.5 CPUs + 1 AMD/ATI GPUs)".

I am using BoincTasks as well, but I have never seen anything like that. I just scrolled through the History tab as well, but nothing like that has visited my hosts in the past couple of weeks.
12) Message boards : Theory Application : Some WUs with fractional core usage? (Message 42301)
Posted 27 Apr 2020 by Harri Liljeroos
Post:
Where do you see this?
13) Message boards : Theory Application : Tasks run 4 days and finish with error (Message 42285)
Posted 25 Apr 2020 by Harri Liljeroos
Post:
I think the GPU apps cannot keep in memory what was happening in the GPU if the calculation is interrupted, or at least they could not utilize it correctly and it caused some problems. So the GPU's memory is released if computation is suspended.

The VirtualBox applications have a double saving system: they checkpoint in Boinc and save the virtual machine status in the VirtualBox Manager, but I don't know if or how the Boinc checkpoint transfers to VirtualBox when calculation resumes. I know that if you have several VirtualBox machines running from a traditional hard drive and you start or stop Boinc, you can swamp the hard drive I/O, and some of the virtual machines can fail with an unrecoverable error. SSDs can handle this disk I/O better.


I don't see why that would happen. If the disk is busy, surely the VM just has to wait? Or is Boinc not allowing it long enough to save?

One thing I have noticed: if I shut my computer down while a VM is running, Windows warns me it's not closed, and waiting doesn't help. There seems to be some sort of bug in it. I wonder if the same thing happens when Boinc instructs it to close when it swaps to another project?

When Boinc shuts down, there is one minute to shut everything down; otherwise you get an error (at least from BoincTasks).

Yes, the VirtualBox Interface is usually the one that doesn't shut down. I then kill it manually from the Windows Task Manager. So far I haven't lost any tasks because of that.
14) Message boards : Theory Application : Tasks run 4 days and finish with error (Message 42283)
Posted 25 Apr 2020 by Harri Liljeroos
Post:
I think the GPU apps cannot keep in memory what was happening in the GPU if the calculation is interrupted, or at least they could not utilize it correctly and it caused some problems. So the GPU's memory is released if computation is suspended.

The VirtualBox applications have a double saving system: they checkpoint in Boinc and save the virtual machine status in the VirtualBox Manager, but I don't know if or how the Boinc checkpoint transfers to VirtualBox when calculation resumes. I know that if you have several VirtualBox machines running from a traditional hard drive and you start or stop Boinc, you can swamp the hard drive I/O, and some of the virtual machines can fail with an unrecoverable error. SSDs can handle this disk I/O better.
15) Message boards : Theory Application : Tasks run 4 days and finish with error (Message 42272)
Posted 24 Apr 2020 by Harri Liljeroos
Post:
I assume VirtualBox is OK with being suspended a lot? The machine I run them on does Rosetta as well, and Boinc tends to switch about a lot. I have Boinc set to switch between applications every 60 minutes. Not sure why it's switching so much, as I have 4 cores allocated to Boinc, with Rosetta set to 3 times the weight of LHC, so you'd think it would just leave 1 core permanently on LHC and 3 on Rosetta.


Actually, some of the VirtualBox tasks do not allow suspension at all; they will just start from the beginning again. I think that Atlas and Theory tasks aren't so picky about short suspensions. Remember to select 'Leave non-GPU tasks in memory while suspended' in your computing preferences. Suspended tasks will then have a better chance of finishing successfully.
16) Message boards : Theory Application : Tasks run 4 days and finish with error (Message 42249)
Posted 21 Apr 2020 by Harri Liljeroos
Post:
It does work for me. Just put the bat-file in the \ProgramData\BOINC\projects\lhcathome.cern.ch_lhcathome directory (normally a hidden directory in Windows) and start it from there.
Thanks, Harri.
Just one question: can this be done while a Theory task is running, or should there be no running Theory tasks, or should BOINC even be closed?

This can be done at any time. That bat-file will keep looping and checking all downloaded LHC tasks until it is manually closed. It will abort all downloaded sherpa tasks (filenames ending with _0, _1 or _2). Normally it will abort them before they start.
The loop runs every 120 seconds. It looks a bit odd, as the screen remains blank until it finds something to abort; it will then report the abort on the screen. So just leave the script running.
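The script's filtering logic can be sketched roughly like this (a minimal Python sketch, not the original bat-file; the boinccmd install path is an assumption, and "sherpa in the task name" is only a heuristic based on how these tasks are usually named):

```python
import subprocess
import time

BOINCCMD = r"C:\Program Files\BOINC\boinccmd.exe"  # assumed install path
PROJECT_URL = "https://lhcathome.cern.ch/lhcathome/"

def is_unstarted_sherpa(task_name: str) -> bool:
    """Heuristic from the post: match sherpa tasks whose name ends
    with the _0, _1 or _2 replica suffix of a freshly downloaded task."""
    return "sherpa" in task_name and task_name.rsplit("_", 1)[-1] in ("0", "1", "2")

def abort_sherpa_tasks() -> None:
    """Ask the client for its task list and abort matching sherpa tasks."""
    out = subprocess.run([BOINCCMD, "--get_tasks"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.strip().startswith("name:"):
            name = line.split("name:", 1)[1].strip()
            if is_unstarted_sherpa(name):
                subprocess.run([BOINCCMD, "--task", PROJECT_URL, name, "abort"])
                print("aborted:", name)

# Loop every 120 seconds, like the bat-file:
# while True:
#     abort_sherpa_tasks()
#     time.sleep(120)
```

As with the bat-file, the screen stays quiet until a matching task is found and aborted.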
17) Message boards : Theory Application : Tasks run 4 days and finish with error (Message 42245)
Posted 20 Apr 2020 by Harri Liljeroos
Post:
It does work for me. Just put the bat-file to the \ProgramData\BOINC\projects\lhcathome.cern.ch_lhcathome directory (normally a hidden directory in Windows) and start it from there.
18) Message boards : Theory Application : Theory task true completion time? (Message 42229)
Posted 18 Apr 2020 by Harri Liljeroos
Post:
Yes, some of these Theory tasks tend to take a long time, but some will finish in 15 minutes. These tasks have a cut-off time of 100 hours (~4 days). Some don't finish even in that time but use the full CPU all the time; some will run the 100 hours but stop using the CPU very early on. After 100 hours they get aborted and you don't get any credit for them. If you read these forums you'll see that they are full of complaints about that. Unfortunately there is very little we users can do about it.
19) Message boards : Theory Application : Tasks run 4 days and finish with error (Message 42198)
Posted 16 Apr 2020 by Harri Liljeroos
Post:
Is there any way we can ask not to be sent the long sherpa tasks? I've been keeping an eye on my system, and for every good sherpa task I get 5 bad ones that have to be aborted.

Some time ago a script was posted here that would abort all sherpa tasks if you happened to download one. I just can't find that post now.
20) Message boards : ATLAS application : Change in Credit? (Message 42002)
Posted 27 Mar 2020 by Harri Liljeroos
Post:
Mine too since I added Theory to the mix with Atlas.



©2020 CERN