Message boards : ATLAS application : ATLAS out of beta
David Cameron
Project administrator
Project developer
Project scientist

Joined: 13 May 14
Posts: 322
Credit: 10,720,545
RAC: 5,947
Message 29474 - Posted: 21 Mar 2017, 8:46:02 UTC

Hi all,

I am happy to announce that the ATLAS app is finally out of beta. I would encourage everyone running ATLAS@Home to now migrate to LHC@Home. From now on only LHC@Home will have new tasks and features.

The credit from ATLAS@Home will be moved here once all ATLAS@Home tasks are finished.

Anyone who experiences problems with ATLAS should go through Yeti's very comprehensive checklist.

Happy ATLAS crunching!
ID: 29474
Yeti
Volunteer moderator
Joined: 2 Sep 04
Posts: 406
Credit: 96,567,558
RAC: 15,593
Message 29485 - Posted: 21 Mar 2017, 11:36:59 UTC - in response to Message 29474.  

So, will the stats for ATLAS on this page be enabled now?


Supporting BOINC, a great concept !
ID: 29485
David Cameron
Project administrator
Project developer
Project scientist

Joined: 13 May 14
Posts: 322
Credit: 10,720,545
RAC: 5,947
Message 29488 - Posted: 21 Mar 2017, 12:00:48 UTC - in response to Message 29485.  

Not yet, but the LHC admins are working on it.
ID: 29488
Erich56

Joined: 18 Dec 15
Posts: 1284
Credit: 23,181,202
RAC: 2,509
Message 29499 - Posted: 21 Mar 2017, 17:33:16 UTC

Is the crunching time for the "new" ATLAS tasks now considerably longer than before?
I am asking because on the PC where I crunch one single-core ATLAS task, it used to take some 7-9 hours; now such a new task has been running for 21+ hours and is at 72%.
A look into the console shows that everything should be okay.
FYI, I am NOT talking about one of the "longrunners".
ID: 29499
Yeti
Volunteer moderator
Joined: 2 Sep 04
Posts: 406
Credit: 96,567,558
RAC: 15,593
Message 29500 - Posted: 21 Mar 2017, 17:44:26 UTC - in response to Message 29499.  
Last modified: 21 Mar 2017, 17:44:53 UTC

Is the crunching time for the "new" ATLAS tasks now considerably longer than before?
I am asking because on the PC where I crunch one single-core ATLAS task, it used to take some 7-9 hours; now such a new task has been running for 21+ hours and is at 72%.
A look into the console shows that everything should be okay.
FYI, I am NOT talking about one of the "longrunners".

Why are you asking the same question again and again?

I already pointed you to this post from David:

The current batch are still 100 events, but each event is more complicated due to a different kind of physics being simulated. On my PC each event takes 10 minutes, so the total CPU time is around 16 hours.

So, what is not clear?


Supporting BOINC, a great concept !
ID: 29500
Erich56

Joined: 18 Dec 15
Posts: 1284
Credit: 23,181,202
RAC: 2,509
Message 29502 - Posted: 21 Mar 2017, 19:02:54 UTC - in response to Message 29500.  

Thanks for your friendly reply
ID: 29502
HerveUAE
Joined: 18 Dec 16
Posts: 123
Credit: 37,120,806
RAC: 2
Message 29505 - Posted: 21 Mar 2017, 20:00:45 UTC

I am happy to announce that the ATLAS app is finally out of beta.
David, this is great news, and evidence that the work you guys at LHC have done over the last weeks has been a success.

Can you clarify any changes or decisions made while moving out of "beta testing":
- What is the formula being configured to define the default (server-side) memory setting based on the number of cores? Is it still 1.6 + 1 * ncores?
- Is there a decision that only "shortrunners" (i.e. 100 events) will be fed to the job queue, or will there still be a combination of short and long runners?
- If it is a combination of short and long runners, have you changed the VM memory setting (rsc_memory_bound)?
- When the volunteer cruncher has set the number of cores to 1, is the task run as a single-core task (like the "ATLAS Simulation" application in ATLAS@Home) or as a multi-core task set to run on 1 core (like the "ATLAS Simulation Running on Multiple Core" application in ATLAS@Home)? I have seen this question in some other thread but did not find an answer.
We are the product of random evolution.
ID: 29505
David Cameron
Project administrator
Project developer
Project scientist

Joined: 13 May 14
Posts: 322
Credit: 10,720,545
RAC: 5,947
Message 29506 - Posted: 21 Mar 2017, 20:18:16 UTC - in response to Message 29505.  

I am happy to announce that the ATLAS app is finally out of beta.
David, this is great news, and evidence that the work you guys at LHC have done over the last weeks has been a success.

Can you clarify any changes or decisions made while moving out of "beta testing":
- What is the formula being configured to define the default (server-side) memory setting based on the number of cores? Is it still 1.6 + 1 * ncores?


Yes, this is still the formula, although I saw some complaints that this doesn't work for 2-cores so I may have to increase it.

- Is there a decision that only "shortrunners" (i.e. 100 events) will be fed to the job queue, or will there still be a combination of short and long runners?


Only shortrunners with 100 events, but some of those shortrunners can run longer than others, depending on the task. If you check the "tasks progress" page on the old ATLAS@Home site, you can see that at the moment we have several different kinds of tasks running in parallel, so there is a mixture of shorter and longer tasks.

As discussed earlier at some point we will look into a dedicated longrunners app for hardcore crunchers.

- If it is a combination of short and long runners, have you changed the VM memory setting (rsc_memory_bound)?
- When the volunteer cruncher has set the number of cores to 1, is the task run as a single-core task (like the "ATLAS Simulation" application in ATLAS@Home) or as a multi-core task set to run on 1 core (like the "ATLAS Simulation Running on Multiple Core" application in ATLAS@Home)? I have seen this question in some other thread but did not find an answer.


There will be only one app for single and multicore. On ATLAS@Home the exact same tasks were run on both apps, and the split was mainly to allow different memory settings for single and multicore. With the new formula here the memory should be OK for both single and multicore within the same app. So I suppose the answer to your question is that it's like running 1-core in the multicore app.
ID: 29506
HerveUAE
Joined: 18 Dec 16
Posts: 123
Credit: 37,120,806
RAC: 2
Message 29508 - Posted: 21 Mar 2017, 20:43:34 UTC
Last modified: 21 Mar 2017, 20:54:45 UTC

Thanks David.

Yes, this is still the formula, although I saw some complaints that this doesn't work for 2-cores so I may have to increase it.
I think it is urgent for the community to manually test various settings and reach a conclusion on this topic. I am of the opinion that the setting should be a flat 4400 MB for any number of cores.
It would be sad to see LHC@Home users run away from the ATLAS application because it does not work on their machines with 2 cores configured, especially since I assume the motivation for merging ATLAS into LHC@Home is to get a larger community participating.

So I suppose the answer to your question is that it's like running 1-core in the multicore app.
If it is a fact that those 1-core applications can run with less than 3000 MB, then setting a formula on the server side to get the right memory setting may be tricky:
- 1 core: 2600 OK?
- 2 cores: 4400 OK.
- 4 cores: 4400 OK.
- 8 cores: 4400 OK.

Edit: 2 cores: 4300 OK. And 4300 may be OK for higher number of cores as well.
We are the product of random evolution.
ID: 29508
Dave Peachey

Joined: 9 May 09
Posts: 17
Credit: 772,975
RAC: 0
Message 29511 - Posted: 21 Mar 2017, 21:18:01 UTC
Last modified: 21 Mar 2017, 21:46:32 UTC

Evening All,

I stayed away from the beta testing (I've done enough of that elsewhere) but my first two "production" ATLAS Simulation WUs seem to have gone off without a hitch with two more on the way ... it's looking OK so far.

I've used the same formula in the "app_config.xml" file as under the ATLAS@Home project, albeit I found that the <name>, <app_name> and <plan_class> parameter values are slightly different: <name> and <app_name> are both "ATLAS" (instead of "ATLAS_MCORE"), and <plan_class> is "vbox64_mt_mcore_atlas" (instead of "vbox_64_mt_mcore"). I also notice that the .vdi file is slightly bigger: 1.62GB (for the current "2017_03_01" image) instead of 1.35GB (for the old ATLAS@Home "1.04" multicore image).
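For anyone who wants to do the same, a sketch of what such an app_config.xml could look like with the new names. The values are illustrative for a six-core/7300MB session, and the --nthreads and --memory_size_mb vboxwrapper options are carried over from what was used on ATLAS@Home, so treat them as an assumption until confirmed here:

```xml
<app_config>
  <app>
    <!-- new app name on LHC@Home (was ATLAS_MCORE on ATLAS@Home) -->
    <name>ATLAS</name>
  </app>
  <app_version>
    <app_name>ATLAS</app_name>
    <!-- new plan class (was vbox_64_mt_mcore) -->
    <plan_class>vbox64_mt_mcore_atlas</plan_class>
    <avg_ncpus>6.0</avg_ncpus>
    <!-- assumed vboxwrapper options; 7300 MB matches a six-core session -->
    <cmdline>--nthreads 6 --memory_size_mb 7300</cmdline>
  </app_version>
</app_config>
```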

I've let them run with six of my sixteen cores as before (so assigning 7300MB of memory for a six-core session), and the timing is within a few minutes either side of what I was seeing previously (around 60 minutes), so I'm guessing I've had the "standard" 100-event WUs thus far.

Cheers
Dave
ID: 29511
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 962
Credit: 6,351,764
RAC: 369
Message 29512 - Posted: 21 Mar 2017, 21:31:03 UTC - in response to Message 29508.  

Edit: 2 cores: 4300 OK. And 4300 may be OK for higher number of cores as well.

Wrong! It will run with 4300MB, but the VM will keep 1 core idle:
a 2-core VM will use only 1 core,
a 3-core VM will use only 2 cores.

So 4400MB of RAM is the bare minimum for efficiently running a multi-core VM.
ID: 29512
David Cameron
Project administrator
Project developer
Project scientist

Joined: 13 May 14
Posts: 322
Credit: 10,720,545
RAC: 5,947
Message 29519 - Posted: 22 Mar 2017, 5:50:51 UTC - in response to Message 29508.  

If it is a fact that those 1-core applications can run with less than 3000 MB, then setting a formula on the server side to get the right memory setting may be tricky:
- 1 core: 2600 OK?
- 2 cores: 4400 OK.
- 4 cores: 4400 OK.
- 8 cores: 4400 OK.

Edit: 2 cores: 4300 OK. And 4300 may be OK for higher number of cores as well.


It is tricky because BOINC only allows the memory formula to be an offset plus a multiple of the number of cores. Using a flat 4400MB for all core counts would exclude those who have only 4GB of RAM and can currently run single-core (although in my own experience it's not practical to run ATLAS with only 4GB). I suspect the LHC tasks with the new software version use more base memory but are more efficient at sharing memory between processes; I am quite surprised that 8 cores can run on only 4400MB.

I have now changed the formula to 2.6 + 0.8 * cores, which is almost the same as the ATLAS@Home multicore tasks. I hope someone can confirm that 4200MB is OK for 2-core.
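As a sanity check, the new offset-plus-multiple formula can be tabulated in a few lines of Python (hypothetical helper name; this assumes the server converts GB to MB by multiplying by 1000, which matches the 4200MB figure quoted for 2 cores):

```python
def atlas_mem_mb(ncores, base_gb=2.6, per_core_gb=0.8):
    """Server-side VM memory bound: a fixed offset plus a per-core multiple."""
    return round((base_gb + per_core_gb * ncores) * 1000)

for n in (1, 2, 4, 8):
    print(n, "cores ->", atlas_mem_mb(n), "MB")
# 1 -> 3400 MB, 2 -> 4200 MB, 4 -> 5800 MB, 8 -> 9000 MB
```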
ID: 29519
Erich56

Joined: 18 Dec 15
Posts: 1284
Credit: 23,181,202
RAC: 2,509
Message 29520 - Posted: 22 Mar 2017, 6:28:51 UTC - in response to Message 29519.  

... (although in my own experience it's not practical to run ATLAS if you have only 4GB).

I can confirm this, unfortunately.
While it was no problem before to run one single-core task on my 4GB notebook, now, with the new setting, the system runs out of RAM and crashes after some 10-12 minutes :-(

Too bad that I can no longer crunch ATLAS on this notebook :-(
ID: 29520
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 962
Credit: 6,351,764
RAC: 369
Message 29521 - Posted: 22 Mar 2017, 6:51:48 UTC - in response to Message 29519.  
Last modified: 22 Mar 2017, 7:49:29 UTC

I have now changed the formula to 2.6 + 0.8 * cores, which is almost the same as the ATLAS@Home multicore tasks. I hope someone can confirm that 4200MB is OK for 2-core.

I tested dual cores as described before, all the way from 3600MB up to 4400MB: tasks with 3600-4200MB fail, 4300MB is inefficient (it keeps 1 core idle); only 4400MB was successful.

For retesting I have now started a dual core with 4200MB of RAM, without using an app_config.xml.

Edit: Validate error - https://lhcathome.cern.ch/lhcathome/result.php?resultid=127460232

So nothing is left but to change the formula to 2.6 + 0.9 * cores.
ID: 29521
maeax

Joined: 2 May 07
Posts: 964
Credit: 34,079,004
RAC: 8,481
Message 29525 - Posted: 22 Mar 2017, 12:44:13 UTC

Same for me: 2 CPUs, no app_config, 4,200 MB in use.

https://lhcathome.cern.ch/lhcathome/result.php?resultid=127551821
ID: 29525
HerveUAE
Joined: 18 Dec 16
Posts: 123
Credit: 37,120,806
RAC: 2
Message 29532 - Posted: 22 Mar 2017, 20:14:02 UTC
Last modified: 22 Mar 2017, 20:14:33 UTC

So nothing is left but to change the formula to 2.6 + 0.9 * cores.
Or maybe 4.2 + 0.1 * cores.
This way you have 4.3 Gbytes for 1 core (compared to 3.5 Gbytes, which is difficult to run on a 4 Gbytes machine anyway).
And for 8 cores, you get 5 Gbytes instead of nearly the double (9.8 Gbytes).

And make it clear that the minimum requirement is 6 or 8 Gbytes of RAM to run ATLAS.
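The two candidate formulas can be compared side by side with a small sketch (hypothetical helper name; values in GB, rounded to one decimal):

```python
def mem_gb(base_gb, per_core_gb, ncores):
    """Offset-plus-multiple memory formula, in GB."""
    return round(base_gb + per_core_gb * ncores, 1)

for n in (1, 2, 4, 8):
    # 2.6 + 0.9 * cores versus 4.2 + 0.1 * cores
    print(n, "cores:", mem_gb(2.6, 0.9, n), "GB vs", mem_gb(4.2, 0.1, n), "GB")
# 1: 3.5 vs 4.3, 2: 4.4 vs 4.4, 4: 6.2 vs 4.6, 8: 9.8 vs 5.0
```

Both formulas give the required 4.4 GB at 2 cores; they diverge in how much headroom they allocate as the core count grows.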
We are the product of random evolution.
ID: 29532
HerveUAE
Joined: 18 Dec 16
Posts: 123
Credit: 37,120,806
RAC: 2
Message 29534 - Posted: 22 Mar 2017, 20:25:49 UTC

I am quite surprised that 8 cores can run on only 4400MB

I tried it only once, after trying a couple of 4-cores with 4400MB. I won't be able to test this setting again in the coming 12 days, but I'll do more tests when back home.
We are the product of random evolution.
ID: 29534
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 962
Credit: 6,351,764
RAC: 369
Message 29569 - Posted: 23 Mar 2017, 18:40:04 UTC - in response to Message 29519.  
Last modified: 23 Mar 2017, 18:41:23 UTC

David Cameron wrote:
... I am quite surprised that 8 cores can run on only 4400MB.

I have now changed the formula to 2.6 + 0.8 * cores, which is almost the same as the ATLAS@Home multicore tasks. I hope someone can confirm that 4200MB is OK for 2-core.

I tested an 8-core ATLAS task again with 4400MB of RAM and it was running fine. https://lhcathome.cern.ch/lhcathome/result.php?resultid=127836501



The picture above shows 85% performance, going down to 81% after the event-processing part, during cleanup.

About your formula. The 2-core VM with 4200MB will not run as I wrote before.
ID: 29569
PHILIPPE

Joined: 24 Jul 16
Posts: 88
Credit: 239,917
RAC: 0
Message 29604 - Posted: 24 Mar 2017, 21:13:43 UTC - in response to Message 29569.  
Last modified: 24 Mar 2017, 21:16:27 UTC

The software you use seems to be an accurate tool.
We can also estimate the CPU efficiency from the times given in the columns of the task lists:

Task 127836501 · WU 61505996 · 23 Mar 2017, 15:48:25 UTC · 23 Mar 2017, 18:33:24 UTC · Completed and validated · run time 9,200.58 s · CPU time 61,219.96 s · credit 132.94 · ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) · windows_x86_64

The simplified formula is: 100 * (CPU time) / (run time * number of cores) = CPU efficiency of the WU.

In this case we find: 100 * 61,219.96 / (9,200.58 * 8) = 83.17%, very close to the result displayed.
The most interesting part is the ability this tool provides to see the dynamic behaviour of the WU's CPU efficiency during its life.
If I understand your explanation of the dynamic average value correctly, it should be:
At the beginning: 100 / 8 = 12.5%
While running: climbing from 12.5% up to 85%
At the end: dropping from 85% to 81%
Thanks for providing this information.
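The efficiency calculation above can be reproduced in a couple of lines (hypothetical helper name):

```python
def cpu_efficiency(cpu_time_s, elapsed_s, ncores):
    """Percentage of the allotted core-seconds the task actually used."""
    return 100.0 * cpu_time_s / (elapsed_s * ncores)

# Figures from the 8-core task above
print(round(cpu_efficiency(61219.96, 9200.58, 8), 2))  # 83.17
```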
ID: 29604
Erich56

Joined: 18 Dec 15
Posts: 1284
Credit: 23,181,202
RAC: 2,509
Message 29608 - Posted: 25 Mar 2017, 5:03:19 UTC

The above example shows that CPU efficiency drops as the number of cores per WU goes up.

On one of my machines, I run WUs at 2 cores each, and the result is:

CPU time: 145,208 secs
Elapsed time: 73,964 secs

resulting in a CPU efficiency of 98.16%.
ID: 29608
©2020 CERN