Message boards : Number crunching : New Work so get it while you can....!
Profile Logan5@SETI.USA
Joined: 30 Sep 04
Posts: 112
Credit: 104,059
RAC: 0
Message 15778 - Posted: 10 Dec 2006, 16:14:24 UTC
Last modified: 10 Dec 2006, 16:25:19 UTC

Up, 6895 workunits to crunch
39022 workunits in progress
54 concurrent connections

As of 16:10 UTC today 10-12-06

Xmas came early.... All my boxen are well fed for the moment..

This was a nice surprise to wake up and see this morning for sure!!

Profile Ocean Archer
Joined: 13 Jul 05
Posts: 143
Credit: 263,300
RAC: 0
Message 15779 - Posted: 10 Dec 2006, 16:43:30 UTC
Last modified: 10 Dec 2006, 16:45:44 UTC

I've got to agree with you -- early Christmas!

Plus, it looks like there's a spread of tasks too -- from 2-hour shorties to 8-hour big ones -- SWEET!!!

Hope everyone remembered to keep some machines on the project, but I suspect there will be some wailing and gnashing of teeth by those who didn't.

Aesop was right --> You snooze, you lose!





If I've lived this long, I've gotta be that old
Profile Logan5@SETI.USA
Joined: 30 Sep 04
Posts: 112
Credit: 104,059
RAC: 0
Message 15780 - Posted: 10 Dec 2006, 17:40:48 UTC
Last modified: 10 Dec 2006, 17:41:15 UTC

Now if only they could get the XML stats feed working again so people's counters would properly update.... THAT would be an awesome holiday present!!!!!

Ahhh but I digress as that's been discussed to death in other threads and it ain't gonna happen anytime soon.... {sigh}
Profile Ocean Archer
Joined: 13 Jul 05
Posts: 143
Credit: 263,300
RAC: 0
Message 15781 - Posted: 10 Dec 2006, 18:28:13 UTC

OK -- it is now 18:25 UTC, and this packet of WUs is committed to the crunchers. The usual turnaround time for a reply is a week, so I expect it will be at least a week before we see any residual work from WUs that error out or otherwise come back "incomplete".

Hope everybody got their fill ....


If I've lived this long, I've gotta be that old
Profile [B^S] Molzahn

Joined: 21 Jan 06
Posts: 46
Credit: 174,756
RAC: 0
Message 15782 - Posted: 10 Dec 2006, 18:51:36 UTC

I am equally elated with the WUs to crunch :)

I hope everyone, especially the frequent posters and LHC's most active proponents, got some while they lasted.

The incurable optimist,
Michael D. Molzahn

blog pictures
Adam23

Joined: 30 May 06
Posts: 40
Credit: 219,288
RAC: 10
Message 15783 - Posted: 10 Dec 2006, 20:17:23 UTC

I've been quite lucky so far; I got lots of WUs in the last two batches :-)
Profile Morgan the Gold
Joined: 18 Sep 04
Posts: 38
Credit: 173,867
RAC: 0
Message 15795 - Posted: 12 Dec 2006, 6:35:35 UTC

I got 3 WUs, 1 each on 3 machines :)
Profile Ocean Archer
Joined: 13 Jul 05
Posts: 143
Credit: 263,300
RAC: 0
Message 15797 - Posted: 12 Dec 2006, 12:40:13 UTC

Morgan --

You're not the only one who seemed to get very few WUs out of the last batch - yet others got plenty. I wonder why? Are the settings on the various machines to blame, or can some of our fellow crunchers "encourage" the system to give out larger/more blocks of data?

Any thoughts/comments out there?





If I've lived this long, I've gotta be that old
Profile sysfried

Joined: 27 Sep 04
Posts: 282
Credit: 1,415,417
RAC: 0
Message 15798 - Posted: 12 Dec 2006, 13:17:48 UTC - in response to Message 15797.  

Morgan --

You're not the only one who seemed to get very few WUs out of the last batch - yet others got plenty. I wonder why? Are the settings on the various machines to blame, or can some of our fellow crunchers "encourage" the system to give out larger/more blocks of data?

Any thoughts/comments out there?


Probably enough WUs from other projects in the queue...

Or ask River~~ ;-)

Cheers,

sysfried

BTW: due to heavy PC reconstruction.... I missed the batch of new work... *booohoooo*
William Timbrook

Joined: 22 Sep 05
Posts: 21
Credit: 6,350,753
RAC: 34
Message 15799 - Posted: 12 Dec 2006, 16:06:32 UTC - in response to Message 15798.  
Last modified: 12 Dec 2006, 16:09:38 UTC

I cheated. My 3 machines at work picked up 7. On my home machine, I noticed it was doing an LHC packet, so I suspended it and let it download another one. I did this 10 times, for a total of 11 packets on my home machine. 6 of those packets took less than 30 seconds each. Doing this had an adverse effect on my Rosetta and World Community Grid processing for this machine, but ...


William


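As an aside, the manual suspend-and-refetch routine William describes above could in principle be driven from the command line rather than through BOINC Manager. The sketch below is only a rough illustration of the idea (not an endorsement): it assumes the modern boinccmd tool, which the 2006-era client did not ship under that name, and the project URL and task name are placeholders.

# Sketch only: suspend an in-progress LHC task and nudge the client to
# contact the scheduler again -- roughly what William did by hand.
# Assumes the modern `boinccmd` CLI; PROJECT_URL and TASK_NAME are placeholders.
import subprocess

PROJECT_URL = "http://lhcathome.example/"   # placeholder, not the real project URL
TASK_NAME = "wu_example_0"                  # placeholder task name

# Suspend the task that is currently crunching...
subprocess.run(["boinccmd", "--task", PROJECT_URL, TASK_NAME, "suspend"], check=True)
# ...then ask the client to contact the project's scheduler for more work.
subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"], check=True)
# Resume later with the same --task call and "resume" instead of "suspend".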
Profile [B^S] Molzahn

Joined: 21 Jan 06
Posts: 46
Credit: 174,756
RAC: 0
Message 15801 - Posted: 12 Dec 2006, 20:20:07 UTC
Last modified: 12 Dec 2006, 20:31:49 UTC

A response on why some got fewer WUs and some got plenty:

Another way of getting more (something I no longer do and wouldn't encourage): you can suspend all other projects, turn the time between contacts up (say, from 0.4 days to 2 or 3 days), contact the LHC servers again, and it will fill up with several days' worth of work.

I realized how bad an idea this was when work started to become scarce a while back; others were sad about not getting work, and it became all too apparent how unfair that was.

I strongly discourage you from 'exploiting' the servers this way.

The incurable optimist,
Mike

blog pictures
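For context on the setting Mike mentions: the "time between contacts" preference ("Connect about every X days" in the old preference pages) ends up as a work-buffer value in the client's global preferences. The snippet below is a minimal sketch, assuming the element names and override mechanism used by recent BOINC clients (work_buf_min_days / work_buf_additional_days, re-read via boinccmd --read_global_prefs_override) and the Linux default data directory; the 2006 client may differ, and, as Mike says, cranking this up just to hoard scarce work isn't encouraged.

# Sketch only: write a global_prefs_override.xml with a larger work buffer
# and tell the running client to re-read it. Element names are those used by
# recent BOINC clients; the data-directory path is an assumption (Linux default).
from pathlib import Path
import subprocess

OVERRIDE = Path("/var/lib/boinc-client/global_prefs_override.xml")  # assumed path

OVERRIDE.write_text(
    "<global_preferences>\n"
    "  <work_buf_min_days>2.0</work_buf_min_days>\n"              # e.g. up from 0.4
    "  <work_buf_additional_days>1.0</work_buf_additional_days>\n"
    "</global_preferences>\n"
)

# Ask the running client to pick up the override without a restart.
subprocess.run(["boinccmd", "--read_global_prefs_override"], check=True)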
PovAddict
Joined: 14 Jul 05
Posts: 275
Credit: 49,291
RAC: 0
Message 15802 - Posted: 12 Dec 2006, 20:30:45 UTC - in response to Message 15801.  

and it will fill up with several days' worth of work.

And then the scientists have to wait five times as long to get things done. And the workunits may even miss their deadlines. And more people end up with little or no work.

I hope we get the new admins soon, and they add a cache limit to the scheduler - along with all the other things that have to be done (host duplication, stats...)

Profile [B^S] Molzahn

Joined: 21 Jan 06
Posts: 46
Credit: 174,756
RAC: 0
Message 15803 - Posted: 12 Dec 2006, 20:34:58 UTC - in response to Message 15802.  

I hope we get the new admins soon, and they add a cache limit to the scheduler - along with all the other things that have to be done (host duplication, stats...)

Amen to that! I probably shouldn't have mentioned this, but you can easily tell who the major offenders with this 'exploit' are (repeated contacts, and downloading units soon after the initial contact or after others').

Mike

blog pictures
Profile Ocean Archer
Joined: 13 Jul 05
Posts: 143
Credit: 263,300
RAC: 0
Message 15805 - Posted: 13 Dec 2006, 1:37:32 UTC

Well, lessee here --

First WU received from LHC --> 12/10/06, 10:01:06 UTC

Last WU returned back to LHC --> 12/12/06, 04:16:04 UTC


That's a spread of roughly 42 hours (less than 2 days) to do the work. That means that all work assigned to my machine(s) was accomplished and returned in less than 30% of the allotted time. True, I didn't get as many WUs as others, but I'm willing to let the program work as it's designed -- nuff said



If I've lived this long, I've gotta be that old
William Timbrook

Joined: 22 Sep 05
Posts: 21
Credit: 6,350,753
RAC: 34
Message 15806 - Posted: 13 Dec 2006, 3:27:33 UTC
Last modified: 13 Dec 2006, 3:29:05 UTC

My home machine, which is the slowest I run, is a P4 2.8 with no hyperthreading and a 533 FSB bus (old but solid motherboard).

first one down : 10 Dec 2006 11:18:31 UTC
first one back up: 11 Dec 2006 2:04:40 UTC

last one down : 10 Dec 2006 17:42:39 UTC - 17:39 is when I intervened.
last one back up : 11 Dec 2006 11:21:17 UTC


William
Profile [B^S] Molzahn

Joined: 21 Jan 06
Posts: 46
Credit: 174,756
RAC: 0
Message 15807 - Posted: 13 Dec 2006, 4:53:21 UTC
Last modified: 13 Dec 2006, 5:53:14 UTC

Edit: I essentially deleted a very long post about spotting scheduler exploitation, why it's an important problem, and observations on other machines. It really wasn't needed, so I removed it. I was mostly pointing out problems with hosts and showing examples of how it's done and how to spot it (essentially points I would have been overstating).

I was preparing for a final tomorrow in 'The History of Southeast Asia' and the tedium got to me, so posting was a nice break from that.

the deleter of pointless posts and the incurably optimistic,
Mike

blog pictures
Profile Morgan the Gold
Joined: 18 Sep 04
Posts: 38
Credit: 173,867
RAC: 0
Message 15826 - Posted: 17 Dec 2006, 6:27:52 UTC

I'm happy with 1 WU; my machines are set that way on purpose ;)
I'd prefer everyone get 1 WU rather than only some of us getting some.
KAMasud

Joined: 7 Oct 06
Posts: 114
Credit: 23,192
RAC: 0
Message 15844 - Posted: 20 Dec 2006, 12:58:06 UTC


:-) What I do is, at night before going to bed, I mark all the other projects "No further work" and go to sleep, leaving the machine to knock at the project. The work cache is also kept a little smaller than normal :-) Over the last two weeks I keep getting a few WUs, not much but enough.
Regards
Masud.
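Masud's overnight routine maps onto the client's "no new tasks" switch, which can also be toggled from the command line. The sketch below is a rough illustration only, assuming the modern boinccmd tool and its nomorework / allowmorework operations; all project URLs are placeholders, not the real ones.

# Sketch only: mark every attached project except LHC@home as "no new work"
# for the night, roughly Masud's routine. Assumes the modern `boinccmd` CLI;
# the project URLs below are placeholders.
import subprocess

OTHER_PROJECTS = [
    "http://rosetta.example/",               # placeholder
    "http://worldcommunitygrid.example/",    # placeholder
]

def set_no_new_work(enable: bool) -> None:
    """Toggle 'no new tasks' on every project other than LHC@home."""
    op = "nomorework" if enable else "allowmorework"
    for url in OTHER_PROJECTS:
        subprocess.run(["boinccmd", "--project", url, op], check=True)

set_no_new_work(True)     # evening: only LHC@home may fetch new work
# set_no_new_work(False)  # morning: let the other projects fetch again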
Profile Ocean Archer
Joined: 13 Jul 05
Posts: 143
Credit: 263,300
RAC: 0
Message 15847 - Posted: 20 Dec 2006, 14:12:56 UTC

From what I can see, it looks like this batch of work became available about 10:00 UTC on the 19th, and finished distribution about 00:00 UTC on the 20th -- that's a spread of fourteen hours for the program to request new work from LHC. Assuming that the programs weren't set to the "No New Work" or "Suspended" settings, that seems like plenty of time to get a WU or few.


If I've lived this long, I've gotta be that old
