Message boards : Number crunching : The Anticipation is Growing......
The Gas Giant
Joined: 2 Sep 04
Posts: 309
Credit: 715,258
RAC: 0
Message 7817 - Posted: 24 May 2005, 23:56:31 UTC

After the front page message
---------------
20.5.2005 12:39 UTC
Good News: The latest run has finished successfully. We also ran two of the five studies locally; the studies run on LHC@home are 100% identical to the control studies.
Some new scripts have been developed which should hopefully increase the speed and ease of use of job submission.
Finally, we are planning a big study of the beam-to-beam effect, hopefully starting next week.
-------------
It is now next week, so my anticipation is growing for the one million wu's being released. Bring it on!

Live long and crunch!

Paul
(S@H1 8888)
BOINC/SAH BETA
ID: 7817

sysfried
Joined: 27 Sep 04
Posts: 282
Credit: 1,415,417
RAC: 0
Message 7819 - Posted: 25 May 2005, 6:48:42 UTC - in response to Message 7817.  

> It is now next week, so my anticipation is growing for the one million wu's
> being released. Bring it on!
>
> Live long and crunch!
>
I agree... :-)
ID: 7819

ralic
Joined: 2 Sep 04
Posts: 28
Credit: 44,344
RAC: 0
Message 7820 - Posted: 25 May 2005, 7:58:54 UTC - in response to Message 7817.  

> It is now next week, so my anticipation is growing for the one million wu's
> being released. Bring it on!

It's kinda like watching the kettle, waiting for it to boil. The harder you stare at it, the slower it boils. :-)
ID: 7820

STE\/E
Joined: 2 Sep 04
Posts: 352
Credit: 1,748,908
RAC: 1,606
Message 7821 - Posted: 25 May 2005, 8:27:37 UTC

It's kinda like watching the kettle, waiting for it to boil. The harder you stare at it, the slower it boils. :-)
==========

I'm still stirring the Pot a little. I still haven't run out of WU's, but I'm probably going to today on most of my Computers ... :P
ID: 7821

The Gas Giant
Joined: 2 Sep 04
Posts: 309
Credit: 715,258
RAC: 0
Message 7822 - Posted: 25 May 2005, 10:57:16 UTC - in response to Message 7821.  

> I'm still stirring the Pot a little, I still haven't run out of WU's but I'm
> probably going to today on most of my Computers ... :P
>
There are names for folks like you, PoorBoy, and in this case none of them are nice! ;P

Paul.
ID: 7822

STE\/E
Joined: 2 Sep 04
Posts: 352
Credit: 1,748,908
RAC: 1,606
Message 7823 - Posted: 25 May 2005, 11:00:38 UTC

There are names for folks like you PoorBoy and in this case none of them nice! ;P Paul.
=========

Would you expect any less from the Resident LHC@Home Schmuck ... :P
ID: 7823

Juerschi
Joined: 26 Oct 04
Posts: 12
Credit: 8,909
RAC: 0
Message 7825 - Posted: 25 May 2005, 12:11:18 UTC - in response to Message 7817.  


> Finally we are planning a big study of the beam to beam effect, hopefully
> starting next week.

Unless I am very much mistaken, it says "hopefully starting next week".

I would also prefer a start ASAP, because one of my boxes, which is attached only to Einstein and LHC, has run out of WU's, and Einstein is unreachable at the moment.

So I hope we will have a bunch of WU's from LHC soon.


ID: 7825

Paul D. Buck
Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 7826 - Posted: 25 May 2005, 13:33:35 UTC

I too have one computer that seems to have run dry, with the exception of the CPDN work unit which it is munching on right now.

We are in another "streak" where:

1) CPDN is Ok
2) Einstein@Home is off-the-air for maintenance
3) LHC@home is between studies
4) Predictor@Home is fixing the servers and Work Units are low
5) SETI@Home - ??? but I can't seem to get work though it says it has it ...

So, I hate this! :)
ID: 7826

STE\/E
Joined: 2 Sep 04
Posts: 352
Credit: 1,748,908
RAC: 1,606
Message 7827 - Posted: 25 May 2005, 15:05:26 UTC - in response to Message 7826.  

> I too have one computer that seems to have run dry with the exxeption of the
> CPDN work unit which it is munching on right now.
>
> We are in another "streak" where:
>
> 1) CPDN is Ok
> 2) Einstein@Home is off-the-air for maintenance
> 3) LCH@Home is between studies
> 4) Predictor@Home is fixing the servers and Work Units are low
> 5) SETI@Home - ??? but I can't seem to get work though it says it has it ...
>
> So, I hate this! :)
=========

Whatever happened to the Great Idea of having all these Great Projects to run ... ;)
ID: 7827

littleBouncer
Joined: 23 Oct 04
Posts: 358
Credit: 1,439,205
RAC: 0
Message 7828 - Posted: 25 May 2005, 15:50:24 UTC - in response to Message 7827.  

> > I too have one computer that seems to have run dry with the exxeption of
> the
> > CPDN work unit which it is munching on right now.
> >
> > We are in another "streak" where:
> >
> > 1) CPDN is Ok
> > 2) Einstein@Home is off-the-air for maintenance
> > 3) LCH@Home is between studies
> > 4) Predictor@Home is fixing the servers and Work Units are low
> > 5) SETI@Home - ??? but I can't seem to get work though it says it has it
> ...
> >
> > So, I hate this! :)
> =========
>
> Whatever happened to the Great Idea of all these Great Projects to run ... ;)
>
Seems there's a 'nasty project-worm' going around.
If one project is OK, then the next one gets affected, and so on....
ID: 7828

ric
Joined: 17 Sep 04
Posts: 190
Credit: 649,637
RAC: 0
Message 7830 - Posted: 25 May 2005, 20:20:26 UTC - in response to Message 7828.  

1) CPDN is Ok
2) Einstein@Home is off-the-air for maintenance
3) LCH@Home is between studies
4) Predictor@Home is fixing the servers and Work Units are low
5) SETI@Home - ??? but I can't seem to get work though it says it has it ...


..
..
6) Pirates out of work
7) BURP out of work
8) LHC alpha: plenty of work, but no account creation

>Seems there turns a 'nasty project-worm' around.
>Is one project OK then the next will be affected and so on....

@LB, not absolutely sure, but it looks like every time the moon is full, the worm(s) is/are more active.
ID: 7830

littleBouncer
Joined: 23 Oct 04
Posts: 358
Credit: 1,439,205
RAC: 0
Message 7832 - Posted: 25 May 2005, 22:07:15 UTC - in response to Message 7830.  

> 1) CPDN is Ok
> 2) Einstein@Home is off-the-air for maintenance
> 3) LCH@Home is between studies
> 4) Predictor@Home is fixing the servers and Work Units are low
> 5) SETI@Home - ??? but I can't seem to get work though it says it has it ...
>
>
> ..
> ..
> 6) Pirates out of work
> 7) burp out of work
> 8) LHC alfa plenty of work, no account creation
>
> >Seems there turns a 'nasty project-worm' around.
> >Is one project OK then the next will be affected and so on....
>
> @LB, not absolutely sure, it looks like everytime the moon is full, the
> worm(s) is/are more active.
>
That's also a possibility... 'The lunatic is in my head...' - Pink Floyd
ID: 7832

The Gas Giant
Joined: 2 Sep 04
Posts: 309
Credit: 715,258
RAC: 0
Message 7845 - Posted: 27 May 2005, 21:20:59 UTC

My anticipation is almost at fever pitch! Were we getting the wu's, or was it just an in-house study of the beam-to-beam effect?

Live long and crunch!

Paul.
ID: 7845

Thierry Van Driessche
Joined: 1 Sep 04
Posts: 157
Credit: 82,604
RAC: 0
Message 7891 - Posted: 31 May 2005, 18:46:30 UTC

News, people:

31.5.2005 10:12 UTC
The new submission scripts caused some name mangling when jobs returned, making it hard to figure out which result belonged to which job. This should be resolved now, and the new scripts have been installed on the production server.
If no further complications arise, we will launch the first study tomorrow. It will be about 30'000 jobs of 1'000'000 turns each.
ID: 7891

Grenadier
Joined: 2 Sep 04
Posts: 39
Credit: 441,128
RAC: 0
Message 7892 - Posted: 31 May 2005, 20:18:31 UTC

Here's hoping they dodge the bug that Predictor tripped on with the newer clients. Whenever a PP@H unit pauses, it basically hangs. From the looks of things, they just rolled back to an older client in order to fix things, as it was at least partly the PP client that was at fault.
ID: 7892

adrianxw
Joined: 29 Sep 04
Posts: 187
Credit: 705,487
RAC: 0
Message 7898 - Posted: 1 Jun 2005, 9:10:16 UTC

P@H have reverted to the 4.28 client. They advise anyone with queued work using clients later than 4.28 to abort them and get new work.

Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
ID: 7898

ralic
Joined: 2 Sep 04
Posts: 28
Credit: 44,344
RAC: 0
Message 7904 - Posted: 2 Jun 2005, 7:34:32 UTC - in response to Message 7891.  

> If no further complications arise we will launch the first study tomorrow. It
> will be about 30'000 1'000'000 turn jobs.


It's the day after tomorrow, so there must have been some complications....

(Sorry, somebody had to do it. :)
ID: 7904

ric
Joined: 17 Sep 04
Posts: 190
Credit: 649,637
RAC: 0
Message 7905 - Posted: 2 Jun 2005, 9:21:24 UTC - in response to Message 7904.  
Last modified: 2 Jun 2005, 9:23:41 UTC

Please stop the bad-mouthing now!

The folks at LHC are doing their best, give 'em time to solve what must be solved.

A member of our team found THE solution to the enigma of "tomorrow".

--> Like the movie "Tomorrow Never Dies", there will always be a tomorrow behind tomorrow.

Sure, we are living through one of the longest tomorrows ever seen, but
may I remind you of the announced "1 hour" of maintenance downtime at Predictor?

That single hour also ran over the whole weekend.

In another thread, sysfried was trying to explain the BOINC logic of time.

Nearly one year ago, on June 22nd, I converted to BOINC.

Since then I have learned that BOINC is a wonderful thing;
it just operates in a separate timezone,
and ALL BOINC projects suffer from it.

This timezone can't be explained with conventional science.

patiently waiting


ID: 7905

Chrulle
Joined: 27 Jul 04
Posts: 182
Credit: 1,880
RAC: 0
Message 7906 - Posted: 2 Jun 2005, 9:31:57 UTC

Well, once again we are stumped.

On XP SP1, XP SP2 and 2000 SP4 the results we get back are strange. The numerical values of the results are the same, but while on other platforms the representation of 0 has 15 digits and an 'E', on the aforementioned platforms we get 16 digits and an 'e'. This is not really a problem, because the value is the same; just the representation is different. So we simply have to change our post-processing checks to actually parse the results instead of just running diff.

We are worried, though, because we do not understand why this is happening, and although we have not seen it affect any of the real numbers yet, it might still happen. We are doing static linking, so there should be no reason for this to happen.
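
Just to illustrate the idea (this is not the project's actual checker, and the file layout, names and tolerances here are made up): a parsing comparison could look roughly like the sketch below, reading whitespace-separated numbers from two result files and comparing them as values rather than as text.
---------------
import math
import sys

def read_values(path):
    # Collect every whitespace-separated token that parses as a number;
    # skip anything else (labels, headers, ...).
    values = []
    with open(path) as f:
        for token in f.read().split():
            try:
                values.append(float(token))
            except ValueError:
                pass
    return values

def results_match(path_a, path_b, rel_tol=1e-12, abs_tol=1e-15):
    # Compare value by value instead of textually, so that
    # "0.00000000000000E+00" and "0.000000000000000e+00" count as equal.
    a = read_values(path_a)
    b = read_values(path_b)
    if len(a) != len(b):
        return False
    return all(math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)
               for x, y in zip(a, b))

if __name__ == "__main__":
    # Usage: python compare_results.py result_lhc.txt result_control.txt
    ok = results_match(sys.argv[1], sys.argv[2])
    print("MATCH" if ok else "MISMATCH")
    sys.exit(0 if ok else 1)
---------------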


Chrulle
Research Assistant & Ex-LHC@home developer
Niels Bohr Institute
ID: 7906

computerguy09
Joined: 26 Oct 04
Posts: 6
Credit: 1,696,248
RAC: 0
Message 7907 - Posted: 2 Jun 2005, 12:15:36 UTC - in response to Message 7898.  

> P@H have reverted to the 4.28 client. They advise anyone with queued work
> using clients later then 4.28 to abort them and get new work.
>

Actually, P@H didn't change which version of the BOINC client they're recommending, just the version of the P@H application (MFOLD).

Mark
ID: 7907