Message boards :
Number crunching :
Daily Quota?
Author | Message |
---|---|
Send message Joined: 17 Sep 04 Posts: 190 Credit: 649,637 RAC: 0 |
> Anyway, the quota is now at 30 per day per host. But we might run out of work tonight... More work coming tomorrow, I hope. That's good to know, but the night is just beginning, and: yup, out of work; that's the other side of the coin. Can we try with fewer than 30? That would still not be enough for a 24-hour full house, but more people could get work, I guess. A fast 3.2 GHz Pentium can in theory do 45-50 average LHC WUs a day, shorties and longies not counted. Since most hosts are attached to multiple projects, a stricter limit could offer a golden middle way. Just a friendly $0.02 |
Send message Joined: 3 Sep 04 Posts: 212 Credit: 4,545 RAC: 0 |
Aargh, this fast??? Let's see how much work we get tomorrow; then we'll decide whether we have to decrease the quota again. One more reason for low quotas: from our point of view it's better if many users get a few workunits instead of some getting many, because then we get the results back faster on average. Markku Degerholm LHC@home admin |
Send message Joined: 18 Sep 04 Posts: 143 Credit: 27,645 RAC: 0 |
> Aargh, this fast??? > Their last quotas haven't reset yet, Markku. The server has to reset the quota for each host first before you see this change. Those already at their quota of 10 for today won't see a new quota until it resets, within 24 hours or whatever time the server says the quota period is. Or so I learned it goes. Jord BOINC FAQ Service |
Send message Joined: 3 Sep 04 Posts: 212 Credit: 4,545 RAC: 0 |
> > Aargh, this fast??? > > > Their last quotas haven't reset yet, Markku. > The server has to reset the quota for each host first before you see this > change. Those already on their quota of 10 for today won't see a new quota > until they are passed within 24 hours. > > Or so I learned it goes. Actually, there is a per-host variable "nresults_today" in the database, which is reset at midnight server time. If "nresults_today" exceeds our quota parameter, then the user won't get any new work (until nresults_today is reset). But if we increase the quota, then the change will take effect immediately and the user gets more work. Markku Degerholm LHC@home admin |
Send message Joined: 17 Sep 04 Posts: 14 Credit: 12,301 RAC: 0 |
Hmmm. The project just started yesterday, with a daily quota of 10 workunits. Then today the daily quota is set to 30, and in a couple of hours you are out of work. Surprise. hehe |
Send message Joined: 18 Sep 04 Posts: 143 Credit: 27,645 RAC: 0 |
That still depends on when the user tried their Update, as I have found it will usually ignore the server and just say hi to the scheduler. Then XXX minutes later (when apparently the client needs to call home base), the client will call in and get any preference changes and new units. XXX = anywhere between 5 and 1440 minutes later. Jord BOINC FAQ Service |
Send message Joined: 23 Oct 04 Posts: 358 Credit: 1,439,205 RAC: 0 |
The quota of 10: these are "long" WUs, formerly known as 'tunescana'. They take about 8 - 12 hours to crunch (depends on the machine). (The first 4 WUs sent were to test or calibrate the host, so they took only about 2 - 10 min. (even 0.0) to crunch.) The quota of 30 (today): these are "normal" WUs, which take 40 - 70 min. to crunch. These are the experiences I've had just now. (Maybe I'm completely wrong.) To the team: thanks for handling this disturbing (or stormy) restart! greetz from Switzerland littleBouncer |
Send message Joined: 1 Sep 04 Posts: 2 Credit: 578,977 RAC: 0 |
I am as anxious as the next man to get LHC WUs for my machines, as I never played with it last time it was up or during the recent alpha. But give them a break: the project has only been live again for hours, and it makes perfect sense to want to load test as much as possible. Most if not all of my hosts are out of LHC work due to the initial low daily quota and now the project being out of work, but it's not the end of the world. Surely the data on load and unit distribution is more important to the long-term stability of the project than initially providing all us desperate crunchers with enough work to keep us happy. In my view it is both fairer and more sensible to spread the "precious" work over numerous hosts rather than giving lots to some and none to most. Shady |
Send message Joined: 29 Sep 04 Posts: 196 Credit: 207,040 RAC: 0 |
> One more reason for low quotas: from our point of view it's better if many > users get a few workunits instead of some getting many, because then we get > the results back faster on average. > > Markku Degerholm > LHC@home admin Markku- Your comment brings up an idea I had a few months ago while LHC was down. LHC interested me for various reasons last year and it was (until 2 weeks ago) the only project I had signed up for via BOINC. My main pc can crunch LHC 100Kcycle WUs in about 37 minutes, or nearly 39 WUs/day. In regard to the fastest turnaround time possible, would it benefit LHC@Home if the scheduler had a way to predict turnaround time per host, should "important" or "time-sensitive" information need to be processed immediately? Travis |
Send message Joined: 27 Sep 04 Posts: 34 Credit: 199,100 RAC: 0 |
> > One more reason for low quotas: from our point of view it's better if > many > > users get a few workunits instead of some getting many, because then we > get > > the results back faster on average. > > > > Markku Degerholm > > LHC@home admin > > Markku- > Your comment brings up an idea I had a few months ago while LHC was down. LHC > interested me for various reasons last year and it was (until 2 weeks ago) the > only project I had signed up for via BOINC. My main pc can crunch LHC > 100Kcycle WUs in about 37 minutes or nearly 39 WUs/day. In regard to the fastest > turnaround time possible, would it benefit LHC@Home if the scheduler had a way > to predict turnaround time per host should "important" or "time-sensitive" > information need be processed immediately? > > Travis The latest server code already predicts turnaround time (see the "Average turnaround time" on your hosts' pages). The people developing BOINC are discussing what to do with it. At Einstein@home they were thinking of doing something with it (because they only have a 7-day deadline), but David Anderson (BOINC "Big Boss") convinced them not to do anything for now. Professor Desty Nova Researching Karma the Hard Way |
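The per-host turnaround prediction mentioned above could be maintained as a simple running average. A minimal sketch, assuming a dict-based host record, an exponential moving average, and a hypothetical `avg_turnaround` field; the smoothing factor and the selection rule for "time-sensitive" work are illustrative, not the actual BOINC server implementation:

```python
ALPHA = 0.1  # smoothing factor (assumed; not BOINC's actual weighting)

def update_turnaround(host, turnaround_secs):
    """Fold one completed result's turnaround time into the host's
    running average using an exponential moving average."""
    prev = host.get("avg_turnaround")
    if prev is None:
        host["avg_turnaround"] = float(turnaround_secs)
    else:
        host["avg_turnaround"] = (1 - ALPHA) * prev + ALPHA * turnaround_secs

def pick_fast_hosts(hosts, deadline_secs):
    """For hypothetical time-sensitive work: keep only hosts whose
    predicted turnaround comfortably beats the deadline (here, by 2x)."""
    return [h for h in hosts
            if h.get("avg_turnaround") is not None
            and h["avg_turnaround"] < deadline_secs / 2]
```

With something like this, a scheduler could in principle route urgent workunits only to hosts that historically return results quickly.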
Send message Joined: 17 Sep 04 Posts: 27 Credit: 8,757 RAC: 0 |
Edited for error: sorry, I missed the prior post about quotas. Happy Crunching! Regards, Rocky WinXP SP2 Boinc - 4.44 Have A Great Day! |
Send message Joined: 1 Sep 04 Posts: 157 Credit: 82,604 RAC: 0 |
> Anyway, the quota is now at 30 per day per host. But we might run out of work > tonight... More work coming tomorrow, I hope. Is there any possibility to link the daily quota to the type of WU? I mean by this: if a WU is one of 100,000 turns or one of 1,000,000 turns, that makes a difference of a factor of 10. Can the quota be adapted to the number of turns of the WUs? Best greetings from Belgium Thierry |
Send message Joined: 3 Sep 04 Posts: 212 Credit: 4,545 RAC: 0 |
> > Is there any possibilty to link the daily quota to the type of WU? > If there is, I'm not aware of it. Markku Degerholm LHC@home admin |
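For what it's worth, one hypothetical way to link the quota to WU length (nothing like this exists in the scheduler, as Markku notes above) would be to charge each WU against the quota in proportion to its number of turns, so a 1,000,000-turn WU counts ten times as much as a 100,000-turn one. All names and constants below are assumptions for the sketch:

```python
BASE_TURNS = 100_000       # a "standard" WU's turn count (assumption)
DAILY_QUOTA_UNITS = 30.0   # daily quota expressed in standard-WU units

def quota_cost(turns):
    """Quota credit consumed by a WU of the given length."""
    return turns / BASE_TURNS

def can_send(host, turns):
    used = host.get("quota_used_today", 0.0)
    return used + quota_cost(turns) <= DAILY_QUOTA_UNITS

def record_send(host, turns):
    host["quota_used_today"] = host.get("quota_used_today", 0.0) + quota_cost(turns)
```

Under this scheme a host could receive 30 short WUs, or 3 long ones, or any mix in between, for the same daily allowance.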
Send message Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0 |
> If there is, I'm not aware of it. Another minor oversight ... :) Though supposed to be set up to be multi-application and multi-project, oops, one size fits all Work Unit! :) I suppose I should not be so easily amused ... |
Send message Joined: 2 Sep 04 Posts: 121 Credit: 592,214 RAC: 0 |
Hm, I operate more than 38GHz of Computing power, but I simply crunch whatever the server gives me to crunch. If I ever hit the daily quota with any of my machines, I wouldn't even know it (5 active Projects ;) ) Whatever LHC does not take, the other Projects get to share. Fair game. Scientific Network : 45000 MHz - 77824 MB - 1970 GB |
Send message Joined: 17 Sep 04 Posts: 190 Credit: 649,637 RAC: 0 |
> Hm, I operate more than 38GHz of Computing power, but I simply crunch whatever > the server gives me to crunch. > > If I ever hit the daily quota with any of my machines, I wouldn't even know it > (5 active Projects ;) ) > > Whatever LHC does not take, the other Projects get to share. Fair game. > Me too: not demanding, just trying to return proper work. Interesting reflection you made. 38 GHz at first view sounds great (it IS great), but I say GHz are not GHz: when I look at what the P4 1.8 GHz with 512 MB RAM (running at 133 MHz) is doing versus the AMD 1200, both not overclocked, the AMD beats it in benchmarks and elapsed time. And we can find many more examples. What about HT-based fast Intel CPUs taking two active slots with the same GHz number? May I compare it with a *huge* V8 Chevy motor or something like that? As long as the power can't reach the wheels without loss/resistance, even 400 horsepower (sounds good on paper) will produce much overhead versus the measured effective horses. Or: the man with an AMD 500 AND work IS faster than the man with a 3.x GHz beast WITHOUT work. There are several good reasons for having a quota, e.g. in case a client has a bad moment and a hard disk crashes, so not too much work is lost. Personally I vote for the quotas, even if the design does not reflect the effective demanded flops. But basically I like seeing your 38 GHz number, because the number itself is impressive; not many of us will have this (theoretical) power. @paul (D. Buck): above you mentioned something about the SETI Classic people/farms not having migrated to BOINC yet. Just a guess, my 0.02 Euro: I think a great part of that hardware would not meet the minimum hardware requirements (perhaps software too), so they will probably never find their way to BOINC-based stuff. It's sad to write, but what do the "classic" people do with a wall of nothing but P1-generation CPUs (60/90/120/133/2xx MHz)? A modern digital SLR of today has more internal RAM and higher processing speed than those old snails. But those GHz's are not for crunching only? With all that accumulated you should perform much better, just my friendly opinion. They do not run 24 h for the project(s) only, I guess. |
Send message Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0 |
> @paul (D. Buck) > more above you mentioned something about the seti classic peolple/farm not > migrated to boinc now. > Just a guess, a 0.02 Euro, > I think a great part of that hardware would not fit the minimum requirements > of hardware (perhaps software too) so they will probaly never find the way to > boinc based stuff. > > It's sad to write, but what do the "classic" people do with a wall of, > but only p1 generation of CPUs? (60/90/120/133/2xx MHz) Well, I will say that SETI@Home Powered by BOINC runs about the same as the old Classic, just a little slower, as the Science Application is doing more work now. So, those that do have a Farm of older machines would still be able to use them to do SETI@Home and possibly Predictor@Home, as they both have work that is pretty fast ... However, I will also say that maybe it is time they upgraded the Farm ... :) I know that I do a periodic "purge" of machines because of this very problem. Because Nancy insisted I get my computers out of the Guest room I had to drop off the lowest 4 machines ... Now that I can put a few in the Library (supposed to be the 4th bedroom ...) I will be getting a few additional boxes ... but I have wandered astray of your thought ... :) Seriously though, even if SETI@Home were doing exactly what the old program was, it would still be time to upgrade, as Astropulse and the other telescopes will be generating work, and the rumor is that the processing time on each of those will be longer than SETI@Home's current processing time. We do have issues with work being assigned to machines that can "handle" the work loads, but there are plans to change the schedulers to make them smarter about assigning work. |
Send message Joined: 2 Sep 04 Posts: 165 Credit: 146,925 RAC: 0 |
> > @paul (D. Buck) > > more above you mentioned something about the seti classic peolple/farm > not > > migrated to boinc now. > > Just a guess, a 0.02 Euro, > > I think a great part of that hardware would not fit the minimum > requirements > > of hardware (perhaps software too) so they will probaly never find the > way to > > boinc based stuff. > > > > It's sad to write, but what do the "classic" people do with a wall of, > > but only p1 generation of CPUs? (60/90/120/133/2xx MHz) > > Well, I will say that SETI@Home Powered by BOINC runs about the > same as the old Classic, just a little slower as the Science Application is > doing more work now. So, those that do have a Farm of older machines would > still be able to use them to do SETI@Home and possibly Predictor@Home as the > both have work that is pretty fast ... > > However, I will also say that maybe it is time they upgraded the Farm ... :) > > I know that I do a periodic "purge" of machines because of this very problem. > Because Nancy insisted I get my computers out of the Guest room I had to drop > off the lowest 4 machines ... > > Now that I can put a few in the Library (the 4th bedroom supposed to be ...) I > will be getting a few additional boxes ... but I have wandered astray of your > thought ... :) > > Seriously though, even if SETI@Home was doing exactly what the old program > was, it would still be time to upgrade as Astropulse and the other telescopes > will be generating work and the rumor is that the processing time on each of > those will be longer than SETI@Home's current processing time. > > We do have issues with work being assigned to machines that can > "handle" the work loads, but there are plans to change the schedulers to make > them smarter about assigning work. > 2 of my 3 p200 machines are doing work on all projects (except CPDN). The other only has enough RAM for LHC and ALPHA. 
My P90 is in the same situation: it only has 64 MB RAM, which is just slightly too little for most projects. BOINC WIKI |
Send message Joined: 2 Sep 04 Posts: 16 Credit: 65,275 RAC: 0 |
@Paul: > Though supposed to be set up to be multi-application and multi-project, oops, > one size fits all Work Unit! :) > > I suppose I should not be so easily amused ... If you're easily amused, then I suppose that I am no better. As a former-programmer-turning-theologian I am still very much interested in the nit picky details of code design. And in this case I think you're right on. A multi-app, multi-project system with a one size fits all approach is a definite design oversight. Perhaps they'll pick up on that throughout the development cycle... Regards, Clint www.clintcollins.org - spouting off at the speed of site |
Send message Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0 |
> If you're easily amused, then I suppose that I am no better. As a > former-programmer-turning-theologian I am still very much interested in the > nit picky details of code design. And in this case I think you're right on. A > multi-app, multi-project system with a one size fits all approach is a > definite design oversight. Perhaps they'll pick up on that throughout the > development cycle... > > Regards, > Clint Me too ... The willingness of some of the development groups to accept external input is, um, unenthusiastic, to say the least. Which is a shame ... one of the whole points of being "open" source is to get critical input from the outside. |
©2024 CERN