Message boards :
Number crunching :
Project balance
Joined: 27 Sep 08 Posts: 798 Credit: 644,719,384 RAC: 234,550
I would like to contribute equally to the sub-projects, as I allow tasks from all of them. At the moment my projects have settled into running as many CMS tasks as possible, alongside 2 ATLAS and 2 Theory, and my computer has queued up plenty more CMS with what looks like one in, one out for the other projects. Does anyone know how to tweak these balances?
Joined: 16 Sep 17 Posts: 100 Credit: 1,618,469 RAC: 0
As far as I know it's not possible to tweak the balance beyond what an app_config.xml provides (i.e. equally and permanently assigning cores).
Joined: 28 Sep 04 Posts: 674 Credit: 43,150,492 RAC: 15,942
I have also seen the balance between sub-projects oscillate over time. Sometimes it is mainly ATLAS, sometimes mainly Theory or SixTrack. I don't know of any way to keep them in constant proportions, especially because SixTrack often runs out of tasks to process. Maybe BOINC corrects the proportions of sub-projects in much the same way it handles different projects?
Joined: 27 Sep 08 Posts: 798 Credit: 644,719,384 RAC: 234,550
Yes, SixTrack is impossible to plan for. If I limit the number of CMS tasks, the computer doesn't pick up the slack with the other projects; it just leaves the other cores idle. I have to try some more options.
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0
I sometimes think I would like that too, but then realize there is no point to it. The project scientists/administrators have a better idea of what their priorities are than I do (or they should have). What difference does it really make to me? One detector is as good as another insofar as I know. That is the way it is on WCG also; you pick the projects, and they set the priorities.
Joined: 15 Jun 08 Posts: 2386 Credit: 222,920,315 RAC: 138,052
> ... Anyone know how to tweak these balances?
The answer is very simple: neither the BOINC client nor the BOINC server provides a mechanism to select a distinct task from the queue. The server sends tasks out in the same order they were generated and stored in the server's shared memory; tasks from deselected apps are skipped. If the client requests n seconds of work, the server sends out
- as many tasks as necessary to reach those n seconds, or
- fewer if a quota is set (server-side!).
In addition, LHC@home runs a couple of servers concurrently for load-balancing reasons, and nobody knows which of them will serve the next request (the decision is made via random DNS name resolution). The best method to balance work from LHC@home is to run multiple BOINC client instances.
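On Linux, a second client instance can be started roughly like this. The data directory and port are placeholders; `--allow_multiple_clients`, `--dir` and `--gui_rpc_port` are standard client options, and the sketch assumes the first client is already using the default port:

```shell
# Second BOINC client with its own data directory and GUI-RPC port
# (path and port number are examples, not requirements)
mkdir -p /var/lib/boinc2
boinc --dir /var/lib/boinc2 --allow_multiple_clients --gui_rpc_port 31418 --daemon

# Manage the second instance by pointing boinccmd at its port
boinccmd --host localhost:31418 --project_attach https://lhcathome.cern.ch/lhcathome <account_key>
```

Each instance then has its own work buffer and its own project preferences (e.g. one venue per instance).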
Joined: 14 Jan 10 Posts: 1268 Credit: 8,421,616 RAC: 2,139
> That is the way it is on WCG also; you pick the projects, and they set the priorities.
WCG has had very good options since last year, where per science you can set a limit of 1 up to 64 tasks, or unlimited:

Project Limits
The following settings allow you to set the maximum number of tasks assigned to one of your devices for a project. Please note that use of these settings could cause your device to not always have work to run if one or more of the projects does not have work available at the time your device requests work.
- Africa Rainfall Project: 10
- FightAIDS@Home - Phase 2: 1
- Help Stop TB: unlimited
- Mapping Cancer Markers: 28
- Microbiome Immunity Project: unlimited
- Smash Childhood Cancer: unlimited

> I have to try some more options
Options I see here are:
- Since you have more hosts, dedicate each computer to a single application using the four venues default, home, school and work.
- Run several instances of BOINC's client on one computer and use the four venues for each client.
Edit: computezrmle was way faster; I was slower, reading and correcting over and over again ;)
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0
> Options I see here are
I have done that myself, for two instances. However, given the unreliability of the work, if one of them is empty, you are out of luck. Since they are on different BOINC instances, they cannot share work between them. I now use a single BOINC instance, choose at least two applications (native ATLAS and CMS), and hope that one of them has work.
Joined: 14 Jan 10 Posts: 1268 Credit: 8,421,616 RAC: 2,139
> Since they are on different BOINC instances, they can not share between them.
I did realize that, so you could set "If no work for selected applications is available, accept work from other applications?" to yes, or choose a different (non-LHC) project as a backup project where you have set the resource share to 0 (zero).
Joined: 27 Sep 08 Posts: 798 Credit: 644,719,384 RAC: 234,550
I agree multiple instances is the most reliable way. After thinking about it more, Jim's thoughts are good: the project knows what's most important, so we should leave it up to them. If there is no CMS work then I do get more Theory or ATLAS.
Joined: 15 Jun 08 Posts: 2386 Credit: 222,920,315 RAC: 138,052
Since you are running Linux, you may slightly or moderately overload your computers and control the resource shares via cgroups, to avoid the interactive processes getting sluggish. This ensures better overall utilization if one of the app queues is empty, at the expense of longer runtimes per task when all work buffers are filled. Mine are running at a factor of 2-2.3.
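A rough sketch of what that could look like with cgroup v2 (requires root; the weight value is arbitrary, and it assumes the unified hierarchy is mounted with the cpu controller enabled in the parent's subtree_control):

```shell
# Give the BOINC client a low CPU weight so interactive processes
# win under contention (default weight is 100; 10 is an example value)
mkdir -p /sys/fs/cgroup/boinc
echo 10 > /sys/fs/cgroup/boinc/cpu.weight

# Move the running client (and thus its children) into the group
echo "$(pidof boinc)" > /sys/fs/cgroup/boinc/cgroup.procs
```

With a weight this low, the overloaded science tasks only soak up CPU time that interactive processes leave idle.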
Joined: 28 Sep 04 Posts: 674 Credit: 43,150,492 RAC: 15,942
> ... Anyone know how to tweak these balances?
If tasks sit in the 'ready to send' queue in the order they were created, and a computer requests work but does not accept, for example, ATLAS tasks, then the next computer that requests work (if it accepts all types) is probably more likely to receive ATLAS work than other types. That could explain the fluctuation between the different sub-projects.
Joined: 27 Sep 08 Posts: 798 Credit: 644,719,384 RAC: 234,550
Interesting: today, when I set CMS to NNT, I never got more than 2 tasks from ATLAS and Theory. Like others reported, it said there were none available when it looked like there were some.
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0
> Interesting today, when I set CMS to NNT then I never got any more than 2 from ATLAS and Theory.
Pretty much the same for me. I was running both CMS and native ATLAS, but set CMS to NNT because of the errors. After that, I could not get any more ATLAS, and received the message that "none were available". So they are coupled somehow.
Joined: 27 Sep 08 Posts: 798 Credit: 644,719,384 RAC: 234,550
I found out what was blocking it: I had set "Max # CPUs" to 1 so that ATLAS would use a single core. I set it to unlimited and now the client gets more Theory and ATLAS; I'll see if this now blocks CMS tasks. I used app_config.xml to limit the number of CPUs to 1 locally. I assume the working set will now be sized for an 8-core task, so it blocks many tasks if you have limited RAM.
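For reference, pinning the per-task CPU count locally is done with an app_version entry in app_config.xml. A sketch along these lines (the app name and plan class are assumptions; the real values should be copied from client_state.xml):

```xml
<!-- Run ATLAS with one core per task; name/plan_class are assumptions -->
<app_config>
    <app_version>
        <app_name>ATLAS</app_name>
        <plan_class>native_mt</plan_class>
        <avg_ncpus>1</avg_ncpus> <!-- cores the client schedules per task -->
    </app_version>
</app_config>
```

The server-side "Max # CPUs" preference, by contrast, also affects which tasks the scheduler will send, which is consistent with the blocking observed above.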
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0
Well, I have another strange one, with no logical explanation at my end. I just set up a new machine (no app_configs). My default location had CMS and native ATLAS selected (CPU=1), so it downloaded one ATLAS before I changed the location. Then I changed the location to run only SixTrack and native Theory (no CMS or ATLAS). After downloading two Theory tasks, it would not download any more until I allowed ATLAS as well. Now I have four native Theory and four native ATLAS (CPU=2) to fill up the 12 cores. So I will do ATLAS too, if that is what it wants (I had to install Singularity to get native ATLAS to run). Maybe if it did what people asked, there would be more users.
©2024 CERN