Message boards :
Number crunching :
New Work so get it while you can....!
Author | Message |
---|---|
Joined: 30 Sep 04 Posts: 112 Credit: 104,059 RAC: 0 |
Up, 6,895 workunits to crunch, 39,022 workunits in progress, 54 concurrent connections as of 16:10 UTC today, 10-12-06. X-Mas came early... all my boxen are well fed for the moment. This was a nice surprise to wake up and see this morning, for sure! |
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0 |
I've got to agree with you -- early Christmas! Plus, it looks like there's a spread of tasks too -- from 2 hr shorties to 8 hr big ones -- SWEET!!! Hope everyone remembered to keep some machines on the project, but I suspect there will be some wailing and gnashing of teeth by those who didn't. Aesop was right --> You snooze, you lose! If I've lived this long, I've gotta be that old |
Joined: 30 Sep 04 Posts: 112 Credit: 104,059 RAC: 0 |
Now if only they could get the XML stats feed working again so people's counters would properly update... THAT would be an awesome holiday present! Ahhh, but I digress, as that's been discussed to death in other threads and it ain't gonna happen anytime soon... {sigh} |
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0 |
OK -- it is now 18:25 UTC, and this packet of WUs is committed to the crunchers. The usual response time for a reply is a week, so I expect it will be at least a week before we see any residual work from WUs that error out or otherwise get returned "incomplete". Hope everybody got their fill... If I've lived this long, I've gotta be that old |
Joined: 30 May 06 Posts: 40 Credit: 218,154 RAC: 1 |
I've been quite lucky so far -- I got lots of WUs in the last two batches :-) |
Joined: 18 Sep 04 Posts: 38 Credit: 173,867 RAC: 0 |
I got 3 WUs, 1 each on 3 machines :) |
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0 |
Morgan -- You're not the only one who seemed to get very few WUs out of the last batch - yet others got plenty. I wonder why? Are the settings on the various machines to blame, or can some of our fellow crunchers "encourage" the system to give out larger/more blocks of data? Any thoughts/comments out there? If I've lived this long, I've gotta be that old |
Joined: 27 Sep 04 Posts: 282 Credit: 1,415,417 RAC: 0 |
Morgan -- Probably enough WUs from other projects in the queue... Or ask River~~ ;-) Cheers, sysfried BTW: due to heavy PC re-construction... I missed the batch of new work... *booohoooo* |
Joined: 22 Sep 05 Posts: 21 Credit: 6,302,511 RAC: 0 |
I cheated. My 3 machines at work picked up 7. On my home machine, I noticed it was doing an LHC packet, so I suspended it and let it download another one. I did this 10 times, for a total of 11 packets on my home machine. 6 of these packets were less than 30 seconds each. Doing this had an adverse effect on my Rosetta and World Grid processing for this machine, but ... William |
Joined: 21 Jan 06 Posts: 46 Credit: 174,756 RAC: 0 |
A response to those with fewer WUs while others got plenty: another way of getting more, something I no longer do and wouldn't encourage, is to suspend all other projects, turn the time between contacts up (say, from 0.4 days to 2 or 3 days), contact the LHC servers again, and the machine will fill up with several days' worth of work. I realized how bad an idea this was when work started to become scarce a while back; others were sad about not getting work, and it became all too apparent how unfair that was. I strongly discourage you from 'exploiting' the servers this way. The incurable optimist, Mike blog pictures |
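For reference, the "time between contacts" setting Mike mentions is BOINC's "Connect to network about every X days" preference, which controls how many days of work the client tries to buffer. A minimal sketch of the corresponding client-side override file (file name and tag layout assumed from typical BOINC clients of this era; your version may differ):

```xml
<!-- Sketch of a BOINC global_prefs_override.xml fragment (assumed layout).
     work_buf_min_days is the "connect about every X days" cache setting;
     raising it from 0.4 to 3 would make the client request roughly
     3 days of work at once, which is exactly the behavior discouraged above. -->
<global_preferences>
    <work_buf_min_days>0.4</work_buf_min_days>
</global_preferences>
```

Keeping the value small, as in this sketch, leaves more work available for other crunchers when batches are scarce.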
Joined: 14 Jul 05 Posts: 275 Credit: 49,291 RAC: 0 |
and it will fill up with several days worth of work. And then the scientists have to wait five times as long to get things done. And the workunits may even miss their deadlines. And more people end up with little or no work. I hope we get the new admins soon, and that they add a cache limit to the scheduler -- along with all the other things that have to be done (host duplication, stats...) |
Joined: 21 Jan 06 Posts: 46 Credit: 174,756 RAC: 0 |
I hope we get the new admins soon, and they add a cache limit to the scheduler - along with all the other things that have to be done (host duplication, stats...) Amen to that! I probably shouldn't have mentioned this, but you can easily tell who the major offenders of this 'exploit' are (repeated contacts, and downloading units soon after the initial contact or soon after others). Mike blog pictures |
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0 |
Well, lessee here -- First WU received from LHC --> 12/10/06, 10:01:06 UTC. Last WU returned to LHC --> 12/12/06, 04:16:04 UTC. That's a spread of roughly 42 hours (less than 2 days) to do the work, which means that all work assigned to my machine(s) was completed and returned in less than 30% of the allotted time. True, I didn't get as many WUs as others, but I'm willing to let the program work as it's designed -- nuff said. If I've lived this long, I've gotta be that old |
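The "less than 30%" figure checks out. A quick sketch of the arithmetic, using the timestamps from the post and assuming the one-week deadline mentioned earlier in the thread:

```python
from datetime import datetime

# Timestamps taken from the post above (UTC).
first_received = datetime(2006, 12, 10, 10, 1, 6)   # 12/10/06, 10:01:06
last_returned  = datetime(2006, 12, 12, 4, 16, 4)   # 12/12/06, 04:16:04
deadline_hours = 7 * 24                             # assumed one-week deadline

# Elapsed turnaround time as a fraction of the allotted week.
elapsed_hours = (last_returned - first_received).total_seconds() / 3600
fraction = elapsed_hours / deadline_hours
print(f"{elapsed_hours:.1f} h elapsed, {fraction:.0%} of the allotted time")
```

This works out to roughly 42.2 hours elapsed, about a quarter of the allotted week.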
Joined: 22 Sep 05 Posts: 21 Credit: 6,302,511 RAC: 0 |
My home machine, the slowest I run, is a P4 2.8 with no hyperthreading and a 533 MHz FSB (old but solid motherboard). First one down: 10 Dec 2006 11:18:31 UTC. First one back up: 11 Dec 2006 2:04:40 UTC. Last one down: 10 Dec 2006 17:42:39 UTC -- 17:39 is when I intervened. Last one back up: 11 Dec 2006 11:21:17 UTC. William |
Joined: 21 Jan 06 Posts: 46 Credit: 174,756 RAC: 0 |
Edit: I essentially deleted a very long post about spotting the problem of scheduler exploitation, why it's an important problem, and views on other people's machines. It really wasn't needed, so I deleted it. I was mostly pointing out problems with hosts and showing examples of how it's done and how to spot it (essentially points I would be overstating). I was preparing for a final tomorrow in 'The History of Southeast Asia' and the tedium got to me, so posting was a nice break from that. The deleter of pointless posts and the incurably optimistic, Mike blog pictures |
Joined: 18 Sep 04 Posts: 38 Credit: 173,867 RAC: 0 |
I'm happy with 1 WU; my machines are set that way on purpose ;) I'd rather everyone get 1 WU than only some get several. |
Joined: 7 Oct 06 Posts: 114 Credit: 23,192 RAC: 0 |
:-) What I do is, at night before going to bed, I mark all the other projects "No further work" and go to sleep, leaving the machine to knock at the project. The work cache is also kept a little less than normal :-) Over the last two weeks I keep getting a few WUs -- not much, but enough. Regards, Masud. |
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0 |
From what I can see, this batch of work became available about 10:00 UTC on the 19th and finished distribution about 00:00 UTC on the 20th -- a spread of fourteen hours in which to request new work from LHC. Assuming the clients weren't set to "No New Work" or "Suspended", that seems like plenty of time to get a WU or few. If I've lived this long, I've gotta be that old |
©2023 CERN