Message boards :
ATLAS application :
How is Work-Distribution calculated?
Joined: 2 Sep 04 Posts: 453 Credit: 193,369,412 RAC: 10,065
Hi, I'm a little puzzled by the work distribution for ATLAS. All my clients get 10 WUs, regardless of how powerful or slow the individual workstation is. So my slowest PC has enough work for up to 2 days, while my fastest PC has work for at most 6 hours. These are my LHC-specific preferences: as long as a box has 10 WUs locally, the server reports "No ATLAS work available". If a box has fewer workunits, it gets exactly the difference to 10. What can I do to get more work onto my power machines?
Supporting BOINC, a great concept!
Joined: 28 Sep 04 Posts: 674 Credit: 43,152,472 RAC: 15,698
Interesting. On my faster host I have also set the task limit to 'No limit', but my number of CPUs is set to 4 and I always get 8 tasks. The host is also running Theory tasks (to keep CPU cores busy) and always gets 8 of those as well. I have set ATLAS to run on only 1 CPU core via app_config.xml. I am using 12 of my 16 cores for CPU tasks, so it varies from 8 ATLAS + 4 Theory to 4 ATLAS + 8 Theory, based on FIFO. So you could experiment with the number of CPUs in your preferences. This affects the amount of memory BOINC thinks each task is using (the actual memory used can be set with app_config).
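[For readers who want to try the app_config.xml approach described above, here is a sketch of such a file (placed in the LHC@home project directory). The app name, plan class, and the ATLAS-specific --memory_size_mb command line are assumptions based on common LHC@home forum examples; check your own client_state.xml for the exact names before using it.]

```xml
<app_config>
  <app_version>
    <!-- Run each vbox ATLAS task on a single core -->
    <app_name>ATLAS</app_name>
    <plan_class>vbox64_mt_mcore_atlas</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <!-- Set the VM's actual memory, independent of the working set size
         BOINC estimates from the 'Max # of CPUs' preference -->
    <cmdline>--memory_size_mb 4800</cmdline>
  </app_version>
</app_config>
```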
Joined: 2 Sep 04 Posts: 453 Credit: 193,369,412 RAC: 10,065
So you could experiment with number of CPUs in your preferences. This affects the amount of memory Boinc thinks each task is using (actual memory used can be set with app_config).Nope, I can't play with the number of CPUs. If I raise this figure the Working-Set-Size of each Workunit will raise up to 10.200 MB. With 5-CPUs the Working-Set-Size is 7.500 MB. The memory-setting in app_config is only responsable for the memory-setting of the Virtual-Machine. The BOINC-Client reserves the memory that is set by Working-Set-Size, even if the Virtual-Machine needs less memory Supporting BOINC, a great concept ! |
Joined: 28 Sep 04 Posts: 674 Credit: 43,152,472 RAC: 15,698
So you could experiment with number of CPUs in your preferences. This affects the amount of memory Boinc thinks each task is using (actual memory used can be set with app_config).Nope, I can't play with the number of CPUs. If I raise this figure the Working-Set-Size of each Workunit will raise up to 10.200 MB. With 5-CPUs the Working-Set-Size is 7.500 MB. That's what I meant with my memory comment, I just couldn't formulate it so precisely. :-) So it looks like we are getting 2 x Max #CPUs tasks to calculate even if have set 'No limit' for 'Max # jobs'. This applies to Atlas and Theory but not to sixtrack and CMS, I think that sixtrack is limited by the cache size if I remember correctly (hard to test as there are no tasks available). CMS follows its own rules (I don't know if anybody knows what they are) and you get over a hundred of those if you have 'No limit' for Max # jobs'. This is a very confusing topic. |
Joined: 13 May 14 Posts: 387 Credit: 15,314,184 RAC: 0
Hi Yeti, nice to have you back :) There is a server-side limitation for ATLAS and Theory to send out at most 2 tasks per CPU. I have asked the admins to increase this to 4 for ATLAS. I would rather not remove the limits completely, since many hosts would end up with tasks they will not be able to process before the deadline.
Joined: 28 Sep 04 Posts: 674 Credit: 43,152,472 RAC: 15,698
"I would rather not remove the limits completely since many hosts will end up with tasks they will not be able to process before the deadline."
The 'Max # jobs' setting should take care of that problem, if only it were possible to set it higher than 8. As it is, owners of new high-performance hosts have to set it to 'No limit'.
Joined: 19 Feb 08 Posts: 708 Credit: 4,336,250 RAC: 0
What do you mean by CPU? I have a CPU with 6 processors; I think it has three cores with 2 threads each. I get 4 Theory tasks and one ATLAS task, all done and validated. Tullio
Joined: 2 May 07 Posts: 2071 Credit: 156,140,797 RAC: 105,338
Here is a thread from 2018 about hyper-threading and CPUs: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4616
Joined: 2 Sep 04 Posts: 453 Credit: 193,369,412 RAC: 10,065
"Hi Yeti, nice to have you back :)"
Yeah, feels good to be back again.
"There is a limitation on the server side for ATLAS and Theory to send out max 2 tasks per CPU. I have asked the admins to increase this to 4 for ATLAS. I would rather not remove the limits completely since many hosts will end up with tasks they will not be able to process before the deadline."
Hm, 4 is better than two, but it is still not really optimal. At the moment, "Max # of CPUs" is used for three things:
1. it sets the number of cores a WU should use, when there is no override by app_config;
2. it is the basis for the working set size;
3. it is used to calculate how many WUs a client gets (multiplied by 2 now, soon by 4).
Supporting BOINC, a great concept!
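[The per-CPU cap described in this thread can be illustrated with a small sketch. This is not the actual LHC@home scheduler code, just a model of the rule as stated here: the server tops a host up to per_cpu_limit x max_cpus tasks, also respecting the user's 'Max # jobs' preference ('No limit' modeled as None).]

```python
def tasks_to_send(tasks_on_host, max_cpus, per_cpu_limit=2, max_jobs=None):
    """Model of the work-distribution rule described in this thread."""
    cap = per_cpu_limit * max_cpus          # server-side per-CPU cap
    if max_jobs is not None:                # user's 'Max # jobs' preference
        cap = min(cap, max_jobs)
    return max(0, cap - tasks_on_host)      # top the host up to the cap

# A host with 'Max # of CPUs' = 4 and an empty queue is topped up to 8:
print(tasks_to_send(0, 4))                   # 8
# With enough tasks already cached, the server sends nothing more:
print(tasks_to_send(10, 4))                  # 0
# Raising the per-CPU limit to 4, as requested for ATLAS, doubles the cap:
print(tasks_to_send(0, 4, per_cpu_limit=4))  # 16
```

[Note that in this model the host's speed never appears, which matches the observed behavior: fast and slow machines with the same preferences get the same number of tasks.]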
Joined: 19 Feb 08 Posts: 708 Credit: 4,336,250 RAC: 0
In QuChemPedIA@home I have wingmen, mostly AMD Ryzen Threadrippers with 160 processors. They run native Linux, and yet my humble 6-processor i5 9400F, running on VirtualBox because it is on a Windows 10 PC, can compete with them in CPU times. Tullio
Joined: 13 Jul 05 Posts: 165 Credit: 14,925,288 RAC: 34
Hi,
"I would rather not remove the limits completely since many hosts will end up with tasks they will not be able to process before the deadline."
I'm not sure I understand this: a "maximum" is a limit, not a requirement that that many tasks be downloaded. We've seen the same issue with CMS. There is already a mechanism in BOINC for requesting how much work to download: the local cache length ("Store at least" and "Store up to an additional" ... days of work). This works with SixTrack, and both CMS and ATLAS have steady, repeatably-sized tasks compatible with this approach. Respecting this user configuration setting would let the project raise 'Max # jobs' without having to worry about the side effects on smaller machines.
Joined: 2 May 07 Posts: 2071 Credit: 156,140,797 RAC: 105,338
"('Store at least' and 'Store up to an additional' ... days of work). This works with Sixtrack, and both CMS and Atlas have steady, repeatably-sized tasks compatible with this approach. Respecting this user configuration setting would let the project raise the Max # jobs without having to worry about the side-effect on smaller machines."
I have 'Store at least' set to 0.5 days and zero for 'Store up to an additional'. It is always a task mix of WCG and ATLAS and/or Theory. All is running well.
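[For reference, the two cache settings discussed in the last posts correspond to fields in BOINC's global_prefs_override.xml (in the BOINC data directory); the GUI's computing preferences write the same values. A fragment matching the 0.5 / 0 days above, as an illustration:]

```xml
<global_preferences>
  <!-- "Store at least ... days of work" -->
  <work_buf_min_days>0.5</work_buf_min_days>
  <!-- "Store up to an additional ... days of work" -->
  <work_buf_additional_days>0</work_buf_additional_days>
</global_preferences>
```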
©2024 CERN