Message boards : ATLAS application : Are there Atlas tasks using more than 8 cores?
Joined: 12 Aug 06 | Posts: 418 | Credit: 5,667,249 | RAC: 48
I have a 24-core Xeon computer, but it downloaded three 8-core ATLAS tasks instead of one 24-core task. Is there a limit of 8 cores for an ATLAS task?
Joined: 15 Jun 08 | Posts: 2386 | Credit: 222,901,896 | RAC: 138,075
> Is there a limit of 8 cores for an ATLAS task?

Yes. ATLAS vbox is limited to 8 cores; ATLAS native (Linux only) is limited to 12 cores. Tasks using fewer cores are usually more efficient and should be preferred as long as the computer has enough RAM to run more of them concurrently. In your case, 16 GB of RAM is the limiting factor.
Joined: 12 Aug 06 | Posts: 418 | Credit: 5,667,249 | RAC: 48
> Yes.

I see, thanks. I thought I remembered someone saying they had a 32-core process running.

> Tasks using fewer cores are usually more efficient and should be preferred as long as the computer has enough RAM to run more of them concurrently.

That will be sorted shortly: I have another 32 GB in the post (to share between the two machines). If need be, they'll both take 128 GB each. But if fewer cores is more efficient, why are tasks not handed out that way? If I had more RAM, would I have received twice the number of 4-core tasks? And since ATLAS is often shared with Theory or another project on a machine, wouldn't it be better to always hand out lower-core tasks? And how can they be more efficient anyway? I can see the tasks fully using all the cores, so they must be doing something useful. Apologies for all the questions :-)
Joined: 15 Jun 08 | Posts: 2386 | Credit: 222,901,896 | RAC: 138,075
Far in the past even Theory vbox was handed out as a multicore app, but it worked completely differently from ATLAS. At the moment ATLAS is the only LHC@home app that can be set up as multicore. This was introduced to allow ATLAS to use more cores on computers with low physical RAM: it saves vbox/OS overhead, since each VM runs its own Linux instance. The default setting of using as many cores as possible (up to 8) can be seen as a historic decision. There are still pros and cons, hence I don't expect a change here.

Efficiency:

Each ATLAS task transforms 200 "Events" into "Hits", using a real POSIX thread per event. The number of threads corresponds to the number of cores used to set up the VM. Since runtimes per event differ (from a few seconds to more than half an hour), the worst case on a 2-core setup could be:

- event 200 has just started and takes, say, 1200 s
- event 199 finishes 1 s after event 200 has started

In this case the VM will still allocate 2 cores but run only 1 thread (plus some small OS overhead) for 1199 s. You may expand this example to a 3-core, 4-core, ..., 8-core setup. To get an impression of the runtime variability, I suggest watching the ATLAS event progress monitor available on ALT-F2 (ATLAS vbox). In addition, during the startup/download phase and the shutdown phase (the latter includes preparation of the result file) the VM runs a single-core process.
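The idle-core effect described above can be sketched with a small simulation. This is a hedged illustration, not the actual ATLAS scheduler: event runtimes are drawn at random from an assumed range (a few seconds up to ~30 minutes), and events are handed greedily to whichever worker thread frees up first.

```python
import random

def simulate(n_cores, n_events=200, seed=42):
    """Greedily schedule n_events onto n_cores worker threads.

    Returns (makespan, busy_fraction): the wall-clock time the VM
    holds all n_cores, and the fraction of that reserved core-time
    actually spent computing events.
    """
    rng = random.Random(seed)
    # Hypothetical per-event runtimes: a few seconds to ~30 minutes.
    runtimes = [rng.uniform(5.0, 1800.0) for _ in range(n_events)]
    cores = [0.0] * n_cores          # time at which each core becomes free
    for t in runtimes:
        i = cores.index(min(cores))  # next free core takes the next event
        cores[i] += t
    makespan = max(cores)
    busy = sum(runtimes) / (n_cores * makespan)
    return makespan, busy

for n in (1, 2, 4, 8):
    makespan, busy = simulate(n)
    print(f"{n} cores: busy fraction {busy:.3f}")
```

A single-core setup is always fully busy by construction; with more cores, the straggler events at the end leave some reserved cores idle, which is the efficiency loss described in the post.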
Joined: 12 Aug 06 | Posts: 418 | Credit: 5,667,249 | RAC: 48
> Far in the past even Theory vbox was handed out as a multicore app, but it worked completely differently from ATLAS.

Since the 8-core ATLAS uses 10 GB (or asks for 10 GB, anyway), can I assume a 4-core one would also require 10 GB? And a 1-core one? So if I ran 24 single-core ATLAS tasks, I'd need 240 GB of RAM! Anyway, whenever I've looked at an ATLAS task it seems to use all 8 cores pretty much all the time, so I'm happy with that. For some reason in Windows 10, a single-core program tends to use just over a core (and on one of my machines about 1.75 cores!), so even if some cores are left idle they get absorbed by something else. And I do like the tasks finishing quicker with more cores :-)
Joined: 28 Sep 04 | Posts: 674 | Credit: 43,150,234 | RAC: 15,996
There is a formula to calculate the memory requirement for ATLAS tasks. If I remember correctly it is 3000 MB + numcores × 900 MB, so 1 CPU core requires 3900 MB, 2 cores 4800 MB, 3 cores 5700 MB, etc.
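The formula as quoted (3000 MB base plus 900 MB per core, per the poster's recollection) is easy to express directly:

```python
def atlas_ram_mb(n_cores: int) -> int:
    """RAM requirement per ATLAS vbox task in MB, per the formula
    quoted in this thread: 3000 MB base + 900 MB per core."""
    return 3000 + n_cores * 900

# Values given in the post:
assert atlas_ram_mb(1) == 3900
assert atlas_ram_mb(2) == 4800
assert atlas_ram_mb(3) == 5700

# 8 cores gives 10,200 MB, consistent with the ~10 GB mentioned above.
print(atlas_ram_mb(8))

# 24 single-core tasks on a 24-core machine:
print(24 * atlas_ram_mb(1))  # 93,600 MB, i.e. roughly 93 GB
```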
Joined: 12 Aug 06 | Posts: 418 | Credit: 5,667,249 | RAC: 48
> There is a formula to calculate the memory requirement for ATLAS tasks. If I remember correctly it is 3000 MB + numcores × 900 MB, so 1 CPU core requires 3900 MB, 2 cores 4800 MB, 3 cores 5700 MB, etc.

OK, so still about 93 GB if I used single-core ATLAS tasks. I'll let the server do what the boffins decided was best: 8 cores. Does anyone know why I hardly ever get them? Almost every task handed out is Theory, yet there are more ATLAS tasks in the queue.
Joined: 15 Jun 08 | Posts: 2386 | Credit: 222,901,896 | RAC: 138,075
> ... what the boffins decided was best ...

"Scientists" would have been more reasonable, I presume.

> Does anyone know why I hardly ever get them? Almost every task handed out is Theory, yet there are more ATLAS tasks in the queue.

When your work request arrives, the server goes through the ready-to-send queue and, one by one, adds tasks from there to the reply list until it is completely filled. Tasks that don't meet all requirements are skipped, e.g. unchecked apps or wrong app versions. Even if your preferences are correctly set, it may happen that other computers requested ATLAS tasks right before yours, and you get what they left in the queue.
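The queue walk described above can be sketched as follows. This is a hypothetical simplification of the scheduler pass, not BOINC's actual implementation: tasks are taken in queue order, unsuitable ones are skipped, and the pass stops once the request is filled.

```python
def fill_work_request(queue, meets_requirements, capacity):
    """Sketch of the scheduler pass described above: walk the
    ready-to-send queue in order, skip tasks the host can't run,
    and stop once the reply list is full."""
    reply = []
    for task in queue:
        if len(reply) >= capacity:
            break
        if meets_requirements(task):
            reply.append(task)
    return reply

# If earlier requests drained the ATLAS tasks, a host accepting any
# work sees mostly Theory at the front of the queue:
queue = ["Theory"] * 5 + ["ATLAS"] * 2
print(fill_work_request(queue, lambda t: True, 3))
# prints ['Theory', 'Theory', 'Theory']
```

This also shows why a host that requests only ATLAS can still come away with work while an "any app" host gets Theory: the filter skips over the Theory tasks at the front.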
Joined: 12 Aug 06 | Posts: 418 | Credit: 5,667,249 | RAC: 48
> "Scientists" would have been more reasonable, I presume.

"Boffin" is a synonym for scientist. Maybe only used in the UK?

> When your work request arrives, the server goes through the ready-to-send queue and, one by one, adds tasks from there to the reply list until it is completely filled. Tasks that don't meet all requirements are skipped, e.g. unchecked apps or wrong app versions.

Yes, I thought it worked like that, but for me to get only Theory suggests there are a lot of folk requesting only ATLAS, leaving a huge number of Theory tasks at the front of the queue. Even when I forced ATLAS only, to test my machines, and accidentally left on the tickbox for getting other work if that isn't available, I got half of each. That doesn't make sense at all.
©2024 CERN