Message boards :
Number crunching :
The Anticipation is Growing......
Send message Joined: 2 Sep 04 Posts: 309 Credit: 715,258 RAC: 0 |
After the front page message --------------- 20.5.2005 12:39 UTC Good News: The latest run has finished successfully. Two of the five studies we also ran locally. The studies run on LHC@home are 100% identical to the control studies. Some new scripts have been developed which hopefully should increase the speed and ease of use of the job submission. Finally we are planning a big study of the beam to beam effect, hopefully starting next week. ------------- It is now next week, so my anticipation is growing for the one million wu's being released. Bring it on! Live long and crunch! Paul (S@H1 8888) BOINC/SAH BETA |
Send message Joined: 27 Sep 04 Posts: 282 Credit: 1,415,417 RAC: 0 |
> It is now next week, so my anticipation is growing for the one million wu's > being released. Bring it on! > > Live long and crunch! > I agree... :-) |
Send message Joined: 2 Sep 04 Posts: 28 Credit: 44,344 RAC: 0 |
> It is now next week, so my anticipation is growing for the one million wu's > being released. Bring it on! It's kinda like watching the kettle, waiting for it to boil. The harder you stare at it, the slower it boils. :-) |
Send message Joined: 2 Sep 04 Posts: 352 Credit: 1,393,150 RAC: 0 |
It's kinda like watching the kettle, waiting for it to boil. The harder you stare at it, the slower it boils. :-) ========== I'm still stirring the Pot a little, I still haven't run out of WU's but I'm probably going to today on most of my Computers ... :P |
Send message Joined: 2 Sep 04 Posts: 309 Credit: 715,258 RAC: 0 |
> I'm still stirring the Pot a little, I still haven't run out of WU's but I'm > probably going to today on most of my Computers ... :P > There are names for folks like you PoorBoy and in this case none of them nice! ;P Paul. |
Send message Joined: 2 Sep 04 Posts: 352 Credit: 1,393,150 RAC: 0 |
There are names for folks like you PoorBoy and in this case none of them nice! ;P Paul. ========= Would you expect any less from the Residing LHC@Home Schmuck ... :P |
Send message Joined: 26 Oct 04 Posts: 12 Credit: 8,909 RAC: 0 |
> Finally we are planning a big study of the beam to beam effect, hopefully > starting next week. Unless I am very much mistaken, that said "hopefully starting next week". I would also prefer a start ASAP, because one of my boxes, which is attached only to Einstein and LHC, has run out of WU's, and Einstein is unreachable at the moment. So I hope we will have a bunch of WU's from LHC soon.
Send message Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0 |
I too have one computer that seems to have run dry, with the exception of the CPDN work unit it is munching on right now. We are in another "streak" where: 1) CPDN is OK 2) Einstein@Home is off the air for maintenance 3) LHC@home is between studies 4) Predictor@Home is fixing the servers and Work Units are low 5) SETI@Home - ??? I can't seem to get work even though it says it has some ... So, I hate this! :)
Send message Joined: 2 Sep 04 Posts: 352 Credit: 1,393,150 RAC: 0 |
> I too have one computer that seems to have run dry, with the exception of the > CPDN work unit it is munching on right now. > > We are in another "streak" where: > > 1) CPDN is OK > 2) Einstein@Home is off the air for maintenance > 3) LHC@home is between studies > 4) Predictor@Home is fixing the servers and Work Units are low > 5) SETI@Home - ??? I can't seem to get work even though it says it has some ... > > So, I hate this! :) ========= Whatever happened to the Great Idea of all these Great Projects to run ... ;)
Send message Joined: 23 Oct 04 Posts: 358 Credit: 1,439,205 RAC: 0 |
> > I too have one computer that seems to have run dry, with the exception of the > > CPDN work unit it is munching on right now. > > > > We are in another "streak" where: > > > > 1) CPDN is OK > > 2) Einstein@Home is off the air for maintenance > > 3) LHC@home is between studies > > 4) Predictor@Home is fixing the servers and Work Units are low > > 5) SETI@Home - ??? I can't seem to get work even though it says it has some ... > > > > So, I hate this! :) > ========= > > Whatever happened to the Great Idea of all these Great Projects to run ... ;) > Seems a 'nasty project-worm' is going around. Once one project is OK, the next gets affected, and so on....
Send message Joined: 17 Sep 04 Posts: 190 Credit: 649,637 RAC: 0 |
1) CPDN is OK 2) Einstein@Home is off the air for maintenance 3) LHC@home is between studies 4) Predictor@Home is fixing the servers and Work Units are low 5) SETI@Home - ??? I can't seem to get work even though it says it has some ... .. .. 6) Pirates out of work 7) BURP out of work 8) LHC alpha plenty of work, no account creation >Seems a 'nasty project-worm' is going around. >Once one project is OK, the next gets affected, and so on.... @LB, not absolutely sure, but it looks like every time the moon is full, the worm(s) is/are more active.
Send message Joined: 23 Oct 04 Posts: 358 Credit: 1,439,205 RAC: 0 |
> 1) CPDN is OK > 2) Einstein@Home is off the air for maintenance > 3) LHC@home is between studies > 4) Predictor@Home is fixing the servers and Work Units are low > 5) SETI@Home - ??? I can't seem to get work even though it says it has some ... > > > .. > .. > 6) Pirates out of work > 7) BURP out of work > 8) LHC alpha plenty of work, no account creation > > >Seems a 'nasty project-worm' is going around. > >Once one project is OK, the next gets affected, and so on.... > > @LB, not absolutely sure, but it looks like every time the moon is full, the > worm(s) is/are more active. > That's also a possibility... 'the lunatics are in my head...' Pink Floyd
Send message Joined: 2 Sep 04 Posts: 309 Credit: 715,258 RAC: 0 |
My anticipation is almost at fever pitch! Are we getting the wu's, or was it just an in-house study of the beam-to-beam effect? Live long and crunch! Paul.
Send message Joined: 1 Sep 04 Posts: 157 Credit: 82,604 RAC: 0 |
News people: 31.5.2005 10:12 UTC The new submission scripts caused some name mangling when jobs returned, making it hard to figure out which result belonged to which job. This should be resolved now, and the new scripts have been installed on the production server. If no further complications arise, we will launch the first study tomorrow. It will be about 30'000 jobs of 1'000'000 turns each.
Send message Joined: 2 Sep 04 Posts: 39 Credit: 441,128 RAC: 0 |
Here's hoping they dodge the bug that Predictor tripped on with the newer clients. Whenever a PP@H unit pauses, it basically hangs. From the looks of things, they just rolled back to an older client in order to fix things, as it was at least partly the PP client that was at fault. |
Send message Joined: 29 Sep 04 Posts: 187 Credit: 705,487 RAC: 0 |
P@H have reverted to the 4.28 client. They advise anyone with queued work using clients later than 4.28 to abort it and get new work. Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
Send message Joined: 2 Sep 04 Posts: 28 Credit: 44,344 RAC: 0 |
> If no further complications arise we will launch the first study tomorrow. It > will be about 30'000 1'000'000 turn jobs. It's the day after tomorrow, so there must have been some complications.... (Sorry, somebody had to do it. :) |
Send message Joined: 17 Sep 04 Posts: 190 Credit: 649,637 RAC: 0 |
Please stop the bad-mouthing now! The folks at LHC are doing their best; give them time to solve what must be solved. A member of our team found THE solution to the enigma of "tomorrow" --> like the movie "Tomorrow Never Dies", there will always be a tomorrow behind tomorrow. Sure, we are passing through one of the longest tomorrows ever seen, but may I remind you of the announced "1 hour" of maintenance downtime at Predictor? That single hour also ran over the full weekend. In another thread, sysfried was trying to put the BOINC logic of time into words. Nearly one year ago, on June 22nd, I converted to BOINC. Since then I have learned that BOINC is a wonderful thing; it just operates in a separate timezone, and ALL BOINC projects suffer from it. This timezone can't be explained with legacy science. patiently waiting
Send message Joined: 27 Jul 04 Posts: 182 Credit: 1,880 RAC: 0 |
Well, once again we are stumped. On XP SP1, XP SP2, and 2000 SP4, the results we get back are strange. The numerical values of the results are the same, but while on other platforms the representation of 0 uses 15 digits and an 'E', on the aforementioned platforms we get 16 digits and an 'e'. This is not really a problem, because the value is the same; only the representation is different. So we simply have to change our post-processing checks to actually parse the results instead of just running diff. We are worried, though, because we do not understand why this is happening, and although we have not seen it affect any of the real numbers yet, it might still happen. We are doing static linking, so there should be no reason for this to happen. Chrulle Research Assistant & Ex-LHC@home developer Niels Bohr Institute
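The switch from a textual diff to parsing the results can be sketched as follows. This is only an illustration of the idea, not the project's actual post-processing code: the file layout (whitespace-separated numbers) and the function names are assumptions.

```python
# Hypothetical sketch: compare two result files numerically, so that
# platform-dependent float formatting (15 digits + 'E' on most platforms
# vs 16 digits + 'e' on the Windows builds mentioned above) no longer
# produces spurious mismatches the way a plain `diff` would.

def parse_values(text):
    """Parse every whitespace-separated token in the file as a float."""
    return [float(tok) for tok in text.split()]

def results_match(text_a, text_b):
    """True if both files encode the same numbers, whatever the formatting."""
    a, b = parse_values(text_a), parse_values(text_b)
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

# The two strings below differ textually but encode the same value,
# so `diff` would flag them while the numeric comparison does not.
print(results_match("0.000000000000000E+00", "0.0000000000000000e+00"))
```

Because both representations parse to the identical binary value, an exact `==` comparison still catches any genuine numerical divergence between platforms.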
Send message Joined: 26 Oct 04 Posts: 6 Credit: 1,696,248 RAC: 0 |
> P@H have reverted to the 4.28 client. They advise anyone with queued work > using clients later than 4.28 to abort it and get new work. > Actually, P@H didn't change which version of the BOINC client they're recommending, just the version of the P@H application (MFOLD). Mark