Message boards :
Number crunching :
Daily quota
Joined: 1 Dec 05 Posts: 62 Credit: 11,441,610 RAC: 0
I just got this message and was wondering what it means.

9/7/2011 9:49:59 AM | LHC@home 1.0 | Sending scheduler request: Requested by user.
9/7/2011 9:49:59 AM | LHC@home 1.0 | Reporting 2 completed tasks, requesting new tasks for CPU and NVIDIA GPU
9/7/2011 9:50:01 AM | LHC@home 1.0 | Scheduler request completed: got 0 new tasks
9/7/2011 9:50:01 AM | LHC@home 1.0 | No work sent
9/7/2011 9:50:01 AM | LHC@home 1.0 | (reached daily quota of 60 tasks)

Can I only do 60 tasks a day?
Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0
It means just that. Look under your account / computers for "Maximum daily WU quota per CPU: 10/day". If a computer produces lots of errors, the quota drops so it does not waste tasks that others could complete correctly. When it completes them correctly, the quota is raised again. The project can set an upper limit, and to be fair to all those attached it often does this so everybody can get their fair share of work. Often, too, during startup this is set low until the problems are all fixed; then, when things are fine and there is abundant work, the project may raise it.
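The adjustment described above can be sketched roughly as follows. This is a simplified model, not the actual BOINC server code: the exact halve-on-error / increment-on-success policy shown here is an assumption for illustration.

```python
# Simplified sketch of a BOINC-style per-host daily quota adjustment.
# Not the real server logic: the halving/increment policy is assumed.

class HostQuota:
    def __init__(self, project_max=10):
        self.project_max = project_max  # upper limit set by the project
        self.quota = project_max        # current per-CPU daily quota

    def on_result(self, valid: bool):
        if valid:
            # Good work: raise the quota, but never above the project cap.
            self.quota = min(self.project_max, self.quota + 1)
        else:
            # Bad work: cut the quota so a broken host wastes fewer tasks.
            self.quota = max(1, self.quota // 2)

h = HostQuota(project_max=10)
h.on_result(False)   # an error result lowers the quota
h.on_result(True)    # a valid result raises it again, one step at a time
print(h.quota)
```

The key property is the asymmetry: errors cut the quota quickly, while good results win it back only gradually, so a persistently broken host cannot burn through much work.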
Joined: 1 Dec 05 Posts: 62 Credit: 11,441,610 RAC: 0
Thanks Keith, I didn't know that! I did check to see if I was sending a lot of errors or invalids and saw not a one. If I open up 2 more cores, will it let me do 20 more? This is the first 8-core processor I've ever had, and I'm a bit of a knothead! Thanks again,

Pick
Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0
Thanks Keith, I didn't know that! I did check to see if I was sending a lot of errors or invalids and saw not a one.

Sorry for the late reply. In theory, yes. The limit is ## per day times the number of CPU cores active in BOINC (not the total CPU cores in the machine). However, the limit has been changed a couple of times by the admin during testing here. I'm not sure of the current limit; I have various numbers across my machines of 2, 4, 10 and 20. The limit also changes in that if you constantly turn in bad work it is reduced, and when you turn in good work it is raised up to the limit set by the project, though I'm not sure of the increments it uses. There is not a large amount of work at the moment, and it seems too many computers are attached, so it vanishes quickly. I think if everyone is patient for the next week or two, all the problems will get sorted out. There is a planned server change (see the url thread), so after that is tested maybe we will get more work more frequently [cross fingers].
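The arithmetic above can be made concrete with a quick sketch. The per-CPU quota value used here (10) is an assumed example, not necessarily the project's actual setting:

```python
# Effective daily task limit = per-CPU quota x CPU cores BOINC may use.
# The quota value (10) is an assumed example for illustration.

def daily_task_limit(per_cpu_quota: int, total_cores: int,
                     boinc_core_pct: float) -> int:
    # Only the cores BOINC is allowed to use count, not the total in the box.
    active_cores = int(total_cores * boinc_core_pct / 100)
    return per_cpu_quota * active_cores

# An 8-core machine letting BOINC use 75% of its cores (6 active cores):
print(daily_task_limit(10, 8, 75))   # 60 tasks/day
# Opening up all 8 cores raises the cap accordingly:
print(daily_task_limit(10, 8, 100))  # 80 tasks/day
```

This would be consistent with the "daily quota of 60 tasks" message above: an 8-core host with a quota of 10/day and 6 cores active in BOINC hits exactly 60.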
Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0
We have crunched quite a few jobs now. Thank you very much for your effort! The rejection factor is small, but it may be worthwhile to understand the numerical cases behind it. We may need to take a pause in submitting new studies until this is understood. It may take us 3 or 4 days. Another change: we should move machines for lhcathomeclassic on Wednesday or Thursday and also fix some small database inconsistencies. If you have noticed problems, this is the time to report them. I know about some team-leader mismatches, some host-user mismatches, and the missing statistics on the lhcathomeclassic/sixtrack page. Anything else?

skype id: igor-zacharov
Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0
We have crunched quite a few jobs now. Thank you very much for your effort!

I reviewed the threads. All I can see is that the front page has [Links....] and a copyright notice that need to be replaced, and that there is no server_status page, which a lot of users have requested. Here it says you need admin rights to view it; every other project has one freely viewable.
Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0
Something else to fix while you're fixing things. David Anderson, on the boinc_projects email list, wrote: "There are (at least) two issues with 6.13.3: …"
Joined: 27 Sep 08 Posts: 817 Credit: 679,828,882 RAC: 218,983
This is a great feature!! Before, the fast machines would get all the work and everyone else would miss out. Now everyone gets a fairer chance.
Joined: 4 May 07 Posts: 250 Credit: 826,541 RAC: 0
You might want to consider making the "Daily Quota" self-adjusting, based on the number of tasks waiting on the server and the number of requests for tasks that have occurred recently. I noticed that there are now 15,000+ tasks waiting today, and this number has crept upwards over the past few days. I crunch Einstein, SETI, LHC 1.0, and Test4Theory (LHC 2.0, using an Oracle VirtualBox). I'd like to get a few more tasks for LHC 1.0. I suspect that those project members with the big crunching farms have all gone to T4T, or are being too limited by the quota to crunch the waiting tasks. It looks like the quota is set to three tasks per day.

Tom
Joined: 25 Apr 08 Posts: 11 Credit: 3,529,049 RAC: 0
... Not likely. T4T currently limits tasks to one per computer (not even one per CPU). Not much there for big crunching farms...

--Bill
Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0
On popular request I have increased the quotas on LHC@HOME 1.0 in config.xml:

[diff of config.xml lines 74,75 — the quoted values were lost in conversion]

Let's see what happens. Any other suggestions for what can be improved?

skype id: igor-zacharov
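For reference, per-host quotas in a BOINC server are set in the project's config.xml. The fragment below shows the general shape of such a change; the numeric values are placeholders, since the values actually set in this post did not survive:

```xml
<config>
  <!-- Per-CPU daily result quota; the value here is a placeholder. -->
  <daily_result_quota>50</daily_result_quota>
  <!-- Cap on unfinished tasks a host may hold at once; also a placeholder. -->
  <max_wus_in_progress>4</max_wus_in_progress>
</config>
```

Both settings act per host: the first limits how many tasks can be issued per CPU per day, the second how many can be outstanding at any moment.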
Joined: 1 Dec 05 Posts: 62 Credit: 11,441,610 RAC: 0
Wow, massive change in the size of each WU! I will have to let this unit run 24/7 to get these done before the deadline. I will make changes to limit the amount received. Still, good to have things rolling in. Well, not as panicky as I thought: I just checked the run times against progress. At first the time was constantly going up and progress was going nowhere; it seems to be turning around now. These are new, I have no experience with them, and I can't find any other participants' run times to compare against. Oh well, I'll do my best.

Pick
Joined: 4 May 07 Posts: 250 Credit: 826,541 RAC: 0
On popular request I have increased the quotas on LHC@HOME 1.0 in config.xml:

Bless you! :)
Joined: 1 Dec 05 Posts: 62 Credit: 11,441,610 RAC: 0
WoW, it was running HSW for a while till I hit the quota limit again. I ran 320 in about 10 minutes, most for 0.02 credits. Now I have to wait forever to run some more. Hope I don't miss too many that have some real credit!!!

Pick
Joined: 9 Oct 10 Posts: 77 Credit: 3,671,357 RAC: 0
WoW, it was running HSW for a while till I hit the Quota limit again. I ran 320 in about 10 Min. Most for .02 credits. Now I have to wait for ever to run some more. Hope I don't miss too many that have some real credit!!! Pick

I think something is wrong in the current batch: the WUs are estimated to run 8 hours, but terminate within seconds...
Joined: 3 Oct 06 Posts: 101 Credit: 8,994,586 RAC: 0
I think something is wrong in the current batch: the WUs are estimated to run 8 hours, but terminate within seconds...

If the current input parameters of the simulation give unstable trajectories of the particle beam (the beam will hit something inside the LHC), there is no reason to continue the simulation: the task is finished. So this batch may well be OK.
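The early exit described above can be pictured with a toy tracking loop. This is purely illustrative, not SixTrack code; the random-kick model, aperture, and numbers are all made up:

```python
import random

# Toy model of why an "unstable" study finishes in seconds: tracking stops
# the moment the particle amplitude exceeds the aperture, instead of
# running the full million turns. Not SixTrack's actual physics.

def track(turns: int, aperture: float, kick: float, seed: int = 1) -> int:
    rng = random.Random(seed)
    x = 0.0
    for turn in range(1, turns + 1):
        x += rng.uniform(-kick, kick)   # random kick applied each turn
        if abs(x) > aperture:           # particle hits the machine wall
            return turn                 # simulation ends early
    return turns                        # survived the whole study

stable = track(turns=1_000_000, aperture=1e9, kick=0.001)
unstable = track(turns=1_000_000, aperture=0.05, kick=0.01)
print(stable, unstable)  # the unstable case returns far fewer turns
```

Both tasks are scheduled as if they needed the full million turns, so the server's runtime estimate is 8 hours either way; the unstable case simply returns almost immediately.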
Joined: 4 Jul 07 Posts: 17 Credit: 35,310 RAC: 0
Yes, I know about that ("beam hit the wall"), but it still sucks when all your daily quota is lost on 5-second WUs...
Joined: 3 Oct 06 Posts: 101 Credit: 8,994,586 RAC: 0
Nawiedzony, you are right, but this is our job here: our hosts must run simulations until the correct parameters for shooting are found. :-)

A possible solution: let's ask the project team to increase the daily quota! We already have a limitation that is very important and useful for the project: the limited number of tasks in progress. With that limit in place, there is no way for some undecided participant to download hundreds of tasks and then, for example, uninstall BOINC. IMHO, the daily quota is an extra limitation now and could be set to a formal (large) value.
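The point about the two limits can be made concrete with a toy calculation; all the numbers below are invented for illustration. With long jobs, a core's own throughput is the binding limit; the daily quota only bites when the jobs are very short:

```python
# Toy calculation: when does the daily quota actually bind?
# All numbers are invented for illustration.

def tasks_per_core_per_day(daily_quota: int, avg_task_minutes: float) -> int:
    # Tasks a single core can physically finish in 24 hours:
    throughput = int(24 * 60 / avg_task_minutes)
    # The host gets whichever limit it hits first:
    return min(daily_quota, throughput)

# Genuine million-turn jobs (~8 hours each): throughput limits you, not quota.
print(tasks_per_core_per_day(daily_quota=10, avg_task_minutes=480))  # 3
# 5-second "beam hit the wall" jobs: the quota is exhausted almost at once.
print(tasks_per_core_per_day(daily_quota=10, avg_task_minutes=0.1))  # 10
```

A tasks-in-progress cap already prevents hoarding, which is why raising the daily quota mainly helps hosts that happen to draw a batch of very short jobs.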
Joined: 12 Jul 11 Posts: 857 Credit: 1,619,050 RAC: 0
Terribly sorry; MEA CULPA. Put it down to old age, stress, overwork, or whatever. I put in a whole batch of rubbish jobs. I should have tested them first. I suspect a real physics problem, with a bad setup of TUNES perhaps. I promise to do better, and there are more genuine 10**6 turn jobs on their way. Sorry about the 60 limit. On the positive side, this is an excellent stress test. I used to measure the overhead of batch systems by seeing how long it took to do absolutely nothing. We will also soon have to take a time out to check the results built with ifort against our old Lahey lf95. Apart from testing the system, this is a major objective of the current runs.

Eric.
Joined: 12 Jul 11 Posts: 857 Credit: 1,619,050 RAC: 0
Well, maybe I was too quick there. I have just run a couple of cases and they are OK, except that the particles don't even complete one turn! I'll have to consult colleagues on this. I should also wait for all the results. It is sad that if you get a "good" million-turn job you get a nice credit, but if you get a bad one you are immediately available to get another, and can soon use up your quota of 60. I'll have to think about this one.

Eric.
©2024 CERN