Message boards :
Number crunching :
BOINC manager skipping LHC project
Send message Joined: 28 Dec 08 Posts: 318 Credit: 4,231,210 RAC: 4,579 |
So BOINC Manager (BM for short) is skipping over LHC and pushing more WCG tasks. WCG is 154K points below LHC in total credit, but LHC is lower in RAC than WCG. At the same time, Rosetta, which was my first project, is higher than LHC in both categories, yet BM gives more tasks to Rosetta and skips LHC. I raised LHC's resource share to 125 while all the other projects are at 100, and I reset the project as well. No change in the behavior of BM toward LHC. What is causing this? |
Send message Joined: 14 Jan 10 Posts: 1279 Credit: 8,484,048 RAC: 1,651 |
The priority order in which projects run (when no tasks are running at high priority) is determined by REC (recent estimated credit): tasks from the project with the lowest REC start first. Total credit, RAC and resource share play no role in which task starts first. A project's REC rises while it has tasks running. The REC is stored and maintained in client_state.xml. If you are familiar with the command prompt, open one, navigate to your BOINC data directory and run these two commands: find "<master_url>" client_state_prev.xml <RETURN> and find "<rec>" client_state_prev.xml <RETURN> |
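The same extraction can be scripted. Here's a minimal Python sketch, assuming a readable copy of the state file and the usual layout of BOINC's client_state.xml (a series of <project> blocks, each containing <master_url> and <rec> tags); the filename is just a default, point it at whichever copy you have:

```python
# Minimal sketch: collect per-project REC values from a BOINC state file.
# Assumes <project> blocks containing <master_url> and <rec> child tags.
import xml.etree.ElementTree as ET

def project_recs(path="client_state.xml"):
    """Return {master_url: rec} for every project in a BOINC state file."""
    root = ET.parse(path).getroot()
    recs = {}
    for proj in root.iter("project"):
        url = proj.findtext("master_url")
        rec = proj.findtext("rec")
        if url is not None and rec is not None:
            recs[url] = float(rec)
    return recs
```

This avoids eyeballing two separate `find` outputs, since each REC stays paired with its project URL.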
Send message Joined: 28 Sep 04 Posts: 675 Credit: 43,609,995 RAC: 15,775 |
Different projects give different amounts of credit for work done. Equal resource share does not mean equal credit or RAC for projects. Here's some comparison between different projects: https://boincstats.com/en/stats/-1/cpcs |
Send message Joined: 28 Dec 08 Posts: 318 Credit: 4,231,210 RAC: 4,579 |
So where in client_state.xml do I find the REC total per project? I see <rec> and <rectime> labels. |
Send message Joined: 14 Jan 10 Posts: 1279 Credit: 8,484,048 RAC: 1,651 |
In each project section of client_state.xml you'll find <rec>xxxx.xxxxxx</rec>. The project with the lowest value will start first, whenever that project has tasks 'Ready to start' and a thread becomes free, e.g. because a task finishes, is suspended or set waiting to run, or the number of available CPUs increases. |
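That rule can be sketched in a few lines of Python. This is only an illustration of the scheduling idea, not the actual client code; the project names and REC values below are made up for the example:

```python
# Illustrative sketch of the lowest-REC-first rule: among projects that
# have tasks "Ready to start", the one with the lowest REC is picked.
def next_project(recs, ready):
    """Return the ready project with the lowest REC, or None if none ready."""
    candidates = {p: r for p, r in recs.items() if p in ready}
    return min(candidates, key=candidates.get) if candidates else None

recs = {"LHC": 99.6, "WCG": 944.5, "Rosetta": 946.9}
print(next_project(recs, {"WCG", "Rosetta"}))  # LHC has nothing ready -> WCG
```

Note the second line of the usage example: even though LHC has the lowest REC overall, it only wins a free slot if it actually has tasks ready, which is the key to the situation in this thread.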
Send message Joined: 28 Dec 08 Posts: 318 Credit: 4,231,210 RAC: 4,579 |
Well, in that case there is a problem. LHC has only 99.xxxxx, while Einstein and all the rest, like GPU, have 12-14,000. Does a reset solve this issue, or what does? |
Send message Joined: 14 Jan 10 Posts: 1279 Credit: 8,484,048 RAC: 1,651 |
GPU has its own rules, because that project does not use much CPU and will always ask for work when the GPU is idle or does not have enough work in its cache. Not sure what you mean by 'time'. Could you extract the lines with <master_url> and the lines with <rec> and post them here? |
Send message Joined: 28 Dec 08 Posts: 318 Credit: 4,231,210 RAC: 4,579 |
'Time' meaning the values listed in the <rec> tags:

<master_url>https://lhcathome.cern.ch/lhcathome/</master_url>
<project_name>LHC@home</project_name>
<rec>99.594310</rec>

Now look at my first project, which I have been running forever and which is CPU only:
<rec>946.911213</rec>

And WCG (World Community Grid):
<rec>944.458780</rec>

Milkyway:
<rec>13309.114918</rec>

So what's going on? |
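For what it's worth, sorting the values posted above confirms LHC@home has by far the lowest REC of these projects, so under the lowest-REC-first rule it should be first in line whenever a CPU thread frees up (the project labels are taken from the post; "Rosetta" stands in for the unnamed first project):

```python
# REC values as posted above; lowest REC should be scheduled first.
recs = {
    "LHC@home": 99.594310,
    "Rosetta": 946.911213,
    "WCG": 944.458780,
    "Milkyway": 13309.114918,
}
order = sorted(recs, key=recs.get)
print(order[0])  # LHC@home
```

Which is exactly why the question becomes: if LHC's REC is lowest, why is the client not requesting LHC tasks at all?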
Send message Joined: 14 Jan 10 Posts: 1279 Credit: 8,484,048 RAC: 1,651 |
You don't have any LHC tasks in progress, so the question is: why are no tasks being requested from LHC? When you update LHC@home manually, does it request CPU tasks? How many tasks have you configured in your LHC project preferences? |
Send message Joined: 28 Dec 08 Posts: 318 Credit: 4,231,210 RAC: 4,579 |
I think I found the problem. In the past I had disabled SixTrack work because I wanted to test VirtualBox again after some problems there. After the other projects ran out of VirtualBox work, I was pretty sure I had re-enabled SixTrack, but I guess I hadn't. Now SixTrack and SixTrack Test are enabled along with all the other projects, and I will increase my storage buffer by half a day to see if SixTrack loads. I currently store just 1 day of work, and the job queue is full right now for that 1 day.

I got a bunch of SixTrack work now, and I see that the other applications are out of work:

7/27/2017 10:45:17 PM | LHC@home | No tasks are available for CMS Simulation
7/27/2017 10:45:17 PM | LHC@home | No tasks are available for LHCb Simulation
7/27/2017 10:45:17 PM | LHC@home | No tasks are available for Theory Simulation
7/27/2017 10:45:17 PM | LHC@home | No tasks are available for ATLAS Simulation
7/27/2017 10:45:17 PM | LHC@home | No tasks are available for ALICE Simulation
7/27/2017 10:45:17 PM | LHC@home | No tasks are available for Benchmark Application

Even though the server status page shows there is work to be sent. |
Send message Joined: 30 Aug 14 Posts: 145 Credit: 10,847,070 RAC: 0 |
I have a similar thing going on. One of my machines (ID: 10486800) is not getting any ATLAS tasks. The BOINC message is "No work available for ATLAS-Simulation", which obviously is not correct. I didn't try other VirtualBox tasks, but SixTrack works well. Why mine when you can research? - GRIDCOIN - Real cryptocurrency without wasting hashes! https://gridcoin.us |
Send message Joined: 30 Aug 14 Posts: 145 Credit: 10,847,070 RAC: 0 |
Nonsense...it's Computer ID: 10491710, which is not getting ATLAS tasks... |
Send message Joined: 18 Dec 15 Posts: 1688 Credit: 103,528,186 RAC: 119,139 |
Nonsense...it's Computer ID: 10491710, which is not getting Atlas tasks...

This is a problem some people have had for the last 3 days (including myself, with all my PCs). It first started out with all tasks failing after some 10-15 minutes (probably due to a connectivity problem with the CERN server); later on, no tasks could be downloaded any more. For more details, see here: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4378 |
Send message Joined: 30 Aug 14 Posts: 145 Credit: 10,847,070 RAC: 0 |
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4378

Thanks for that, I was following that thread already. I have the feeling this situation is somehow related to SixTrack. Whenever SixTrack has thousands of workunits in the queue, ATLAS seems to get "hiccups". I recall similar problems the last time SixTrack had so many WUs to distribute, a few weeks ago. Could these be connected? Why mine when you can research? - GRIDCOIN - Real cryptocurrency without wasting hashes! https://gridcoin.us |
©2024 CERN