Message boards : Number crunching : I think we should restrict work units


Rob Lilley

Joined: 29 Nov 05
Posts: 8
Credit: 105,015
RAC: 0
Message 14255 - Posted: 7 Jul 2006, 9:40:53 UTC

checked the project last night: NO NEW WORK, 10 W/U IN PROGRESS (roughly)

checked the project this morning: NO NEW WORK, 10308 W/U IN PROGRESS ?????



Same here, and I always have LHC set to allow new work - I must have blinked...
It seems as though the only way to get any LHC units is to have your machine running and connected to the Internet 24/7.

What's worse is that I run three other projects and I'm about 3 hours from running completely dry.


Well, I keep my processor warm by being connected to seven projects: LHC, Rosetta, Einstein, Predictor, QMC, Seti and Xtremlab (i.e. effectively six :-) ). This gives a mixture of long and short deadlines, and BOINC suspends work fetch and uses earliest-deadline-first scheduling if there's ever any danger of being overcommitted. It works for me anyway, as I tend to have my machine on for most of the day, and I haven't missed a deadline since the days when I was using a 266 MHz PII running Seti@home classic only.

My advice, for what it's worth, is if you always want work, don't give up on LHC, just connect up to as many other projects in addition to LHC as it takes to keep your machine occupied. For those who care about credits, you may not get as many credits for LHC, but you will get more overall, and I find you don't get stressed if one or two projects are down or don't have work.
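The earliest-deadline-first fallback Rob describes can be sketched as a toy model (this is an illustration of the general EDF idea, not BOINC's actual scheduler code; the task names and numbers are made up):

```python
from datetime import datetime, timedelta

# Toy model: normally a BOINC client runs tasks in fair-share order, but
# when some task's remaining work no longer fits before its deadline, the
# client switches to earliest-deadline-first (EDF) ordering.

def pick_next_task(tasks, now):
    """tasks: list of dicts with 'name', 'deadline', 'hours_left'."""
    # A task is "in danger" if its remaining crunch time overruns its deadline.
    danger = any(now + timedelta(hours=t["hours_left"]) > t["deadline"]
                 for t in tasks)
    if danger:
        # Overcommitted: run whichever task's deadline comes first.
        return min(tasks, key=lambda t: t["deadline"])["name"]
    # Otherwise any fair-share order will do; here, just the first task.
    return tasks[0]["name"]

now = datetime(2006, 7, 7, 12, 0)
tasks = [
    {"name": "rosetta_wu", "deadline": now + timedelta(days=10), "hours_left": 6},
    {"name": "lhc_wu",     "deadline": now + timedelta(hours=5), "hours_left": 6},
]
print(pick_next_task(tasks, now))  # prints "lhc_wu": its short deadline is at risk
```

This is why mixing long-deadline and short-deadline projects works: the short-deadline units jump the queue only when they actually need to.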
ID: 14255
John Hunt

Joined: 13 Jul 05
Posts: 133
Credit: 162,641
RAC: 0
Message 14258 - Posted: 7 Jul 2006, 10:45:00 UTC
Last modified: 7 Jul 2006, 10:45:38 UTC

I agree with Rob.

I'm attached to nine projects -


At this moment in time, I have a couple of LHC WUs to crunch; I'm waiting for work from Predictor and Rosetta, and HashClash has no work. I'm also waiting for SETI to come back online so I can upload completed WUs and get some more.

Life is good......

ID: 14258
Bob Guy

Joined: 28 Sep 05
Posts: 21
Credit: 11,715
RAC: 0
Message 14274 - Posted: 9 Jul 2006, 23:06:24 UTC - in response to Message 14230.  
Last modified: 9 Jul 2006, 23:06:56 UTC

I am glad to see that there are people around that are just as stubborn as I am.

There is another name for this attitude, it is called greed.

This is what I hear when I read your post:

I want more.
Give me more.
I have a right to get more.

Just because the project doesn't equitably distribute work and permits the large caches of WUs does not mean that it is right or fair.
ID: 14274
m.mitch

Joined: 4 Sep 05
Posts: 112
Credit: 1,832,123
RAC: 0
Message 14276 - Posted: 10 Jul 2006, 7:27:26 UTC - in response to Message 14274.  

Just because the project doesn't equitably distribute work and permits the large caches of WUs does not mean that it is right or fair.


(1) What does it mean then?
(2) How can you know whether anyone uses a large cache or not?
(3) How is it unfair to you?

ID: 14276
peterthomas

Joined: 5 Sep 05
Posts: 23
Credit: 15,496
RAC: 0
Message 14279 - Posted: 10 Jul 2006, 11:18:16 UTC

How's the head, Mike? Got a concussion yet?

It appears that you are banging your head against a wall.

Some people just seem to want to gripe about the way this project is set up.

They WILL NOT LISTEN when told that the project staff have OTHER duties at CERN and the LHC than to provide them with work.

The results from the work we do, regardless of who does it, are used to design, construct and commission the LHC. To them this process is instantaneous, but in reality it takes time. Maybe even weeks, shock horror.

At one time I too was upset at the way this project operates, but I did some research and came to realise that because this IS REAL WORLD science and NOT THEORETICAL, it takes TIME.

So once again I will say what has been said MANY times:
if the project admins were not happy with the way the project is running, they would change it.

Really, all that this discussion is doing is showing the impatience of some of those who donate their computer time.
ID: 14279
Bob Guy

Joined: 28 Sep 05
Posts: 21
Credit: 11,715
RAC: 0
Message 14281 - Posted: 10 Jul 2006, 12:40:50 UTC - in response to Message 14276.  

Just because the project doesn't equitably distribute work and permits the large caches of WUs does not mean that it is right or fair.


(1) What does it mean then?

Regarding equitable distribution: The project is interested in getting their WUs crunched. Who crunches them is not important, at least to the project developers, it seems. They are not the WU police, which is probably as it should be. They have better things to do than to be paying attention to the fighting children.

(2) How can you know whether anyone uses a large cache or not?

It's been quite obviously stated so by more than one participant here. I assume they're not lying. It's also easy enough to look at the WU history of any particular computer/user in this project. I really don't have a complaint when a user has a dial-up - I do understand that they're just trying to get the best use of their equipment.

(3) How is it unfair to you?

As participants in this community we all have an obligation to be fair, and equitable, to each other. This is the moral and ethical obligation of a civilized society. It can only be fair if all participants are given an equal opportunity to contribute to the project. I understand, and I am sad that this is unlikely to happen.

And just as a comment, your clever questions are the kind lawyers use to obscure the truth.
ID: 14281
m.mitch

Joined: 4 Sep 05
Posts: 112
Credit: 1,832,123
RAC: 0
Message 14283 - Posted: 10 Jul 2006, 15:10:07 UTC - in response to Message 14279.  
Last modified: 10 Jul 2006, 15:34:43 UTC

How's the head, Mike? Got a concussion yet?

It appears that you are banging your head against a wall.

Some people just seem to want to gripe about the way this project is set up.

They WILL NOT LISTEN when told that the project staff have OTHER duties at CERN and the LHC than to provide them with work.

The results from the work we do, regardless of who does it, are used to design, construct and commission the LHC. To them this process is instantaneous, but in reality it takes time. Maybe even weeks, shock horror.

At one time I too was upset at the way this project operates, but I did some research and came to realise that because this IS REAL WORLD science and NOT THEORETICAL, it takes TIME.

So once again I will say what has been said MANY times:
if the project admins were not happy with the way the project is running, they would change it.

Really, all that this discussion is doing is showing the impatience of some of those who donate their computer time.


I figure if I ask them enough questions they'll realise they are the problem. Let's see how that goes with the next post ;-) I've been looking forward to this answer!




Click here to join the #1 Aussie Alliance on LHC.
ID: 14283
m.mitch

Joined: 4 Sep 05
Posts: 112
Credit: 1,832,123
RAC: 0
Message 14284 - Posted: 10 Jul 2006, 15:25:10 UTC - in response to Message 14281.  
Last modified: 10 Jul 2006, 15:37:21 UTC


(1) What does it mean then?

Regarding equitable distribution: The project is interested in getting their WUs crunched. Who crunches them is not important, at least to the project developers, it seems. They are not the WU police, which is probably as it should be. They have better things to do than to be paying attention to the fighting children.

We don't disagree on that.

(2) How can you know whether anyone uses a large cache or not?
It's been quite obviously stated so by more than one participant here. I assume they're not lying. It's also easy enough to look at the WU history of any particular computer/user in this project. I really don't have a complaint when a user has a dial-up - I do understand that they're just trying to get the best use of their equipment.

Okay, now that one doesn't hold water. If we had a large stock of other project WU's, we wouldn't have room for LHC WU's when they become available.

(3) How is it unfair to you?

As participants in this community we all have an obligation to be fair, and equitable, to each other. This is the moral and ethical obligation of a civilised society. It can only be fair if all participants are given an equal opportunity to contribute to the project. I understand, and I am sad that this is unlikely to happen.

And just as a comment, your clever questions are the kind lawyers use to obscure the truth.


Thank you for suggesting I'm clever. I think you've done a pretty decent job yourself, dancing around the only conclusion that can be drawn from the constant complaints: "I want work units!"
ID: 14284
Gaspode the UnDressed

Joined: 1 Sep 04
Posts: 506
Credit: 118,619
RAC: 0
Message 14286 - Posted: 10 Jul 2006, 16:45:01 UTC
Last modified: 10 Jul 2006, 16:45:21 UTC

Bored, bored, bored.

Has anyone anything new to say?


Gaspode the UnDressed
http://www.littlevale.co.uk
ID: 14286
KWSN - A Shrubbery
Joined: 3 Jan 06
Posts: 14
Credit: 32,201
RAC: 0
Message 14287 - Posted: 11 Jul 2006, 3:18:05 UTC - in response to Message 14286.  

Bored, bored, bored.

Has anyone anything new to say?



Oh sure, it's a slow news day:


Life is inherently not fair; get used to it. Any attempt to make life more fair invariably results in making things worse.

There is a reason that "Utopia" translates to "nowhere". It doesn't exist; maybe someday in our far distant future, but not now.

The scientists are obviously getting their results in a manner that suits them, or they would have changed the rules. And since everyone operates under the same rules on this project, I would have to conclude that the work distribution system is about as fair as is humanly possible.

So, why would people complain?

From the altruistic approach (helping the science as much as possible) everyone should be happy that there are enough contributors to download all the work available, it will clearly get done and not languish on a server somewhere for lack of CPU cycles.

That leaves who gets the work. Obviously some people have faster computers or connections than others so they get a larger portion of work units. Well, it would seem to me that faster computers and connections would also crunch and return the results faster. Sounds like another plus for the project, more people should be happy the project is working so well.

I guess the only remaining complaint is that (insert name here) didn't get their fair share. Complaining that work didn't land on your system is simply the way a child reacts when someone else is playing with a toy they want.

Wouldn't that mean that the people who complain the loudest about not having work are truly looking out only for their own interests and not that of the project?

Seriously, get a life: join another project, or play within the rules that are given to you and stop complaining about who gets the majority of a limited resource. Be happy that the work is getting done and stop arguing on the internet. Win or lose, it's still stupid. (And yes, I see the irony there.)
ID: 14287
senatoralex85

Joined: 17 Sep 05
Posts: 60
Credit: 4,221
RAC: 0
Message 14288 - Posted: 11 Jul 2006, 3:47:58 UTC
Last modified: 11 Jul 2006, 4:02:40 UTC


I am glad to see that there are people around that are just as stubborn as I am.


There is another name for this attitude, it is called greed.

This is what I hear when I read your post:

I want more.
Give me more.
I have a right to get more.

Just because the project doesn't equitably distribute work and permits the large caches of WUs does not mean that it is right or fair.


---------------------------------------------------------------------------

BOBGUY

I find your response to my post interesting. You take one sentence from my post out of context. On top of that, you make observations that have NO bearing whatsoever. HOW CAN YOU PROVE THAT I AM BEING GREEDY? I do not appreciate your personal attack!

1. It costs me money to crunch the workunits
2. There is wear and tear on my computer components
3. I am not even close to any of the top crunchers on this project or any other seeing that I only have 1 four year old computer.
4. I have not broken any rules nor have I maliciously prevented anyone from getting workunits.

WE ALL HAVE THE SAME CRUNCHING OPPORTUNITY. NO ONE CAN ARGUE THIS POINT!

5. Just like the oil companies, it is a case of supply and demand. If you don't like the system, live in another country that uses other economic controls. Alex, in a post below, posted this link. Did you read it? http://en.wikipedia.org/wiki/In_soviet_russia I am not trying to criticize you here.

Next time you read my post, do not just hear. LISTEN!

The issue here is not about sharing equitably, as you claim. The issue is a shortage of workunits. Anyone on any project can cache workunits. What sets this project apart is the fact that there is not always work. I do not hear anyone on Rosetta complaining about caches. Do you?

I NEVER said I have a right to any workunits. It is not a right, it is a privilege and I realize that. Do you?

If you look at any dictionary, greed is defined as wanting more than one needs or deserves, especially materially. As far as I know, I will not become rich crunching workunits, nor will I be gaining anything.

Do not attack me on what you hear. Attack me on the facts and information presented.

There is more hunger in this world for peace than there is for bread. - Mother Teresa
ID: 14288
Philip Martin Kryder

Joined: 21 May 06
Posts: 73
Credit: 8,710
RAC: 0
Message 14291 - Posted: 12 Jul 2006, 4:53:23 UTC

Has anyone asked the CERN folk to increase the replication factor to, say, 15?

This would triple the amount of available work.

It would also ensure (virtually) that all quorums would be met expeditiously.

And since the current sentiment of the posters on all sides of this "discussion" seems to be "we want more work - and we are willing to work for free,"
what would be the downside?

Of course, the current quorum of 3 and replication of 5 likely meet the project's needs, or they would already have increased it...
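For anyone new to the jargon: a quorum of 3 with initial replication of 5 means each distinct workunit is sent to 5 hosts, and it validates once 3 agreeing results come back. The "tripling" arithmetic is then straightforward (a back-of-envelope sketch; the batch size of 1000 is purely illustrative):

```python
# With N distinct workunits, the number of results (tasks) the scheduler
# can hand out scales linearly with the initial replication factor.
def results_available(workunits, replication):
    return workunits * replication

n = 1000  # hypothetical batch of distinct workunits
print(results_available(n, 5))   # 5000 results under the current setting
print(results_available(n, 15))  # 15000, i.e. three times as much "work"
# Note the science gained is unchanged: quorum is still 3, so the extra
# copies per workunit are redundant computation.
```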


Again, for the newcomers to the thread, I ask:
"Who is more greedy and why?"
a) A person who has spent money on a fast computer and fast link and has a large cache and is donating the computer time to science.
or
b) A person who is running a slow computer on a slow dial-up link, who has not spent money on upgrades or faster links or faster machines.








ID: 14291
Alex

Joined: 2 Sep 04
Posts: 378
Credit: 10,765
RAC: 0
Message 14292 - Posted: 12 Jul 2006, 8:22:38 UTC

>Has anyone asked the CERN folk to increase the replication factor to say, 15?

The database would have to track three times as many sent-out workunits. Given the recent Database Overload error, I don't think increasing it to 15 is the answer.

They've received over a thousand of the 1600 magnets. The more important studies will likely come when the surveyors verify the positions of the magnets once they're all installed, giving data for a more 'real world' study, and when the LHC starts planning how to throw the big switch and turn everything on.
I'm not the LHC Alex, just a number cruncher like everyone else here.
ID: 14292
FalconFly
Joined: 2 Sep 04
Posts: 121
Credit: 592,214
RAC: 0
Message 14293 - Posted: 12 Jul 2006, 23:08:21 UTC - in response to Message 14292.  

IMHO there's only one solution to the "workunit starvation problem":

Make people understand that an LHC workunit is not a 'Holy Grail' that every participant has the 'right to touch and embrace'.

I mean, get over it. If there's work... do it... if there's none... work for other projects in the meantime and periodically check back.

Option 2:
Install a premium service for $5 per month that reads the "WorkUnits available" counter from the website every hour and immediately e-mails/messages/phones all customers when the counter is, like, >1000 or so *g*

Someone ought to be able to make money off all those greedy workunit-starved folks.
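FalconFly's "premium service", tongue in cheek as it is, would amount to a trivial polling loop. A sketch, under the assumption of a hypothetical `get_wu_counter()` that scrapes the "WorkUnits available" figure from the project's server-status page (stubbed here with a fixed value):

```python
THRESHOLD = 1000  # notify subscribers once this many workunits are available

def get_wu_counter():
    # Hypothetical scraper for the "WorkUnits available" counter on the
    # project's server-status page; stubbed with a fixed value here.
    return 10308

def notify(subscribers, count):
    # Stand-in for the e-mail/message/phone blast to the paying customers.
    return [f"{s}: {count} WUs up, go go go!" for s in subscribers]

def poll_once(subscribers):
    # One pass of the hourly check: alert only above the threshold.
    count = get_wu_counter()
    if count > THRESHOLD:
        return notify(subscribers, count)
    return []

print(poll_once(["alice", "bob"]))
```

In practice this is exactly why such a service would make the starvation worse, not better: it just lets the fastest notified clients drain the queue sooner.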
Scientific Network : 45000 MHz - 77824 MB - 1970 GB
ID: 14293
Jack H

Joined: 22 Dec 05
Posts: 27
Credit: 46,565
RAC: 0
Message 14294 - Posted: 13 Jul 2006, 2:27:06 UTC

That makes twice now that I've noticed there was only a very small amount of work, and that I obviously missed it between two connections.
It will still take about a week for those 2900 units to be crunched. Indeed, something will have to be done.
Making people pay would be against the philosophy of distributed computing.

For the highly-strung: my cache is set to 0.1! If the units could be done in 2-3 days MAXIMUM, tempers would flare less and the science would advance!

So, to the greedy ones: moderate your caches accordingly, or let us work properly!


ID: 14294
[B^S] Dora

Joined: 27 Apr 06
Posts: 26
Credit: 13,559
RAC: 0
Message 14297 - Posted: 13 Jul 2006, 3:02:30 UTC

Here are my messages since last evening. You JUST put up several thousand new WUs and, for the fourth time in a row, I did not get any..... Both my Einstein and Seti are set to no new work. I did not run the benchmarks at 9:51 pm today....

If you don't want me in this project, just say so, but I have been able to get work before.

You are making me sad....

Dora

7/11/06 9:38:54 PM||Starting BOINC client version 5.2.13 for windows_intelx86
7/11/06 9:38:54 PM||libcurl/7.14.0 OpenSSL/0.9.8 zlib/1.2.3
7/11/06 9:38:54 PM||Data directory: C:\\PROGRAM FILES\\BOINC
7/11/06 9:38:55 PM||Processor: 1 GenuineIntel Pentium(r) II Processor
7/11/06 9:38:55 PM||Memory: 119.46 MB physical, 1.88 GB virtual
7/11/06 9:38:55 PM||Disk: 7.85 GB total, 4.20 GB free
7/11/06 9:38:56 PM|SETI@home|Computer ID: 2126086; location: home; project prefs: default
7/11/06 9:38:56 PM|LHC@home|Computer ID: 151456; location: home; project prefs: default
7/11/06 9:38:56 PM|Einstein@Home|Computer ID: 618888; location: ; project prefs: default
7/11/06 9:38:56 PM||General prefs: from LHC@home (last modified 2006-07-07 21:06:49)
7/11/06 9:38:56 PM||General prefs: no separate prefs for home; using your defaults
7/11/06 9:38:57 PM||Remote control not allowed; using loopback address
7/11/06 9:38:57 PM|SETI@home|Resuming computation for result 07ap06ab.21551.11714.884652.3.12_0 using setiathome_enhanced version 515
7/11/06 9:38:59 PM|Einstein@Home|Deferring computation for result h1_0105.0_S5R1__218_S5R1a_0
7/11/06 9:38:59 PM|Einstein@Home|Deferring computation for result h1_0105.0_S5R1__217_S5R1a_0
7/11/06 9:38:59 PM||Using earliest-deadline-first scheduling because computer is overcommitted.
7/11/06 9:38:59 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/11/06 9:38:59 PM|LHC@home|Reason: To fetch work
7/11/06 9:38:59 PM|LHC@home|Requesting 864000 seconds of new work
7/11/06 9:39:00 PM||Couldn't resolve hostname [lhcathome-sched1.cern.ch]
7/11/06 9:39:03 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi failed with a return value of -113
7/11/06 9:39:03 PM|LHC@home|No schedulers responded
7/11/06 9:40:07 PM|LHC@home|Fetching master file
7/11/06 9:40:12 PM|LHC@home|Master file download succeeded
7/11/06 9:40:20 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/11/06 9:40:20 PM|LHC@home|Reason: To fetch work
7/11/06 9:40:20 PM|LHC@home|Requesting 864000 seconds of new work
7/11/06 9:40:28 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/11/06 9:40:28 PM|LHC@home|No work from project
[log trimmed: from 9:41 PM on 7/11 through 7:24 AM on 7/12 the client repeated the same cycle roughly 40 more times, at intervals backing off from about a minute to an hour or more: "Sending scheduler request ... Requesting 864000 seconds of new work" (i.e. a 10-day cache), answered every single time with "No work from project". The master file was re-fetched at 1:05 AM, 3:40 AM and 7:20 AM, with the same result each time.]
7/12/06 7:24:17 AM|LHC@home|Reason: To fetch work
7/12/06 7:24:17 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:24:28 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:24:28 AM|LHC@home|No work from project
7/12/06 7:25:32 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:25:32 AM|LHC@home|Reason: To fetch work
7/12/06 7:25:32 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:25:42 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:25:42 AM|LHC@home|No work from project
7/12/06 7:26:57 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:26:57 AM|LHC@home|Reason: To fetch work
7/12/06 7:26:57 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:27:02 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:27:02 AM|LHC@home|No work from project
7/12/06 7:28:22 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:28:22 AM|LHC@home|Reason: To fetch work
7/12/06 7:28:22 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:28:27 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:28:27 AM|LHC@home|No work from project
7/12/06 7:39:23 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:39:23 AM|LHC@home|Reason: To fetch work
7/12/06 7:39:23 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:39:33 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:39:33 AM|LHC@home|No work from project
7/12/06 7:47:07 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:47:07 AM|LHC@home|Reason: To fetch work
7/12/06 7:47:07 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:47:18 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:47:18 AM|LHC@home|No work from project
7/12/06 8:26:08 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 8:26:08 AM|LHC@home|Reason: To fetch work
7/12/06 8:26:08 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 8:26:13 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 8:26:13 AM|LHC@home|No work from project
7/12/06 9:17:20 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:17:20 AM|LHC@home|Reason: To fetch work
7/12/06 9:17:20 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:17:25 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:17:25 AM|LHC@home|No work from project
7/12/06 9:18:29 AM|LHC@home|Fetching master file
7/12/06 9:18:34 AM|LHC@home|Master file download succeeded
7/12/06 9:18:40 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:18:40 AM|LHC@home|Reason: To fetch work
7/12/06 9:18:40 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:18:45 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:18:45 AM|LHC@home|No work from project
7/12/06 9:19:48 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:19:48 AM|LHC@home|Reason: To fetch work
7/12/06 9:19:48 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:19:53 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:19:53 AM|LHC@home|No work from project
7/12/06 9:20:58 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:20:58 AM|LHC@home|Reason: To fetch work
7/12/06 9:20:58 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:21:24 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:21:24 AM|LHC@home|No work from project
7/12/06 9:22:27 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:22:27 AM|LHC@home|Reason: To fetch work
7/12/06 9:22:27 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:23:04 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:23:05 AM|LHC@home|No work from project
7/12/06 9:24:07 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:24:07 AM|LHC@home|Reason: To fetch work
7/12/06 9:24:07 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:24:12 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:24:12 AM|LHC@home|No work from project
7/12/06 9:25:31 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:25:31 AM|LHC@home|Reason: To fetch work
7/12/06 9:25:31 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:25:37 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:25:37 AM|LHC@home|No work from project
7/12/06 9:30:48 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:30:48 AM|LHC@home|Reason: To fetch work
7/12/06 9:30:48 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:30:53 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:30:53 AM|LHC@home|No work from project
7/12/06 9:43:47 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 9:43:47 AM|LHC@home|Reason: To fetch work
7/12/06 9:43:47 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 9:43:52 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 9:43:53 AM|LHC@home|No work from project
7/12/06 10:11:27 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 10:11:27 AM|LHC@home|Reason: To fetch work
7/12/06 10:11:27 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 10:11:38 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 10:11:38 AM|LHC@home|No work from project
7/12/06 10:40:57 AM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 10:40:57 AM|LHC@home|Reason: To fetch work
7/12/06 10:40:57 AM|LHC@home|Requesting 864000 seconds of new work
7/12/06 10:41:02 AM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 10:41:02 AM|LHC@home|No work from project
7/12/06 11:54:38 AM||request_reschedule_cpus: process exited
7/12/06 11:54:38 AM|SETI@home|Computation for result 07ap06ab.21551.11714.884652.3.12_0 finished
7/12/06 11:54:41 AM|SETI@home|Started upload of 07ap06ab.21551.11714.884652.3.12_0_0
7/12/06 11:54:52 AM|SETI@home|Finished upload of 07ap06ab.21551.11714.884652.3.12_0_0
7/12/06 11:54:52 AM|SETI@home|Throughput 6366 bytes/sec
7/12/06 12:04:30 PM||request_reschedule_cpus: project op
7/12/06 12:04:35 PM|SETI@home|Sending scheduler request to http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi
7/12/06 12:04:35 PM|SETI@home|Reason: Requested by user
7/12/06 12:04:35 PM|SETI@home|Reporting 1 results
7/12/06 12:04:40 PM|SETI@home|Scheduler request to http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi succeeded
7/12/06 12:04:55 PM||request_reschedule_cpus: project op
7/12/06 12:04:57 PM|Einstein@Home|Restarting result h1_0105.0_S5R1__218_S5R1a_0 using einstein_S5R1 version 402
7/12/06 12:05:08 PM||request_reschedule_cpus: project op
7/12/06 12:05:12 PM|SETI@home|Sending scheduler request to http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi
7/12/06 12:05:12 PM|SETI@home|Reason: Requested by user
7/12/06 12:05:12 PM|SETI@home|Note: not requesting new work or reporting results
7/12/06 12:05:17 PM|SETI@home|Scheduler request to http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi succeeded
7/12/06 12:05:54 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:05:54 PM|LHC@home|Reason: To fetch work
7/12/06 12:05:54 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:05:59 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:05:59 PM|LHC@home|No work from project
7/12/06 12:07:02 PM|LHC@home|Fetching master file
7/12/06 12:07:07 PM|LHC@home|Master file download succeeded
7/12/06 12:07:13 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:07:13 PM|LHC@home|Reason: To fetch work
7/12/06 12:07:13 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:07:18 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:07:18 PM|LHC@home|No work from project
7/12/06 12:08:23 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:08:23 PM|LHC@home|Reason: To fetch work
7/12/06 12:08:23 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:08:39 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:08:39 PM|LHC@home|No work from project
7/12/06 12:09:44 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:09:44 PM|LHC@home|Reason: To fetch work
7/12/06 12:09:44 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:09:55 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:09:55 PM|LHC@home|No work from project
7/12/06 12:10:58 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:10:58 PM|LHC@home|Reason: To fetch work
7/12/06 12:10:58 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:11:03 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:11:03 PM|LHC@home|No work from project
7/12/06 12:12:07 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:12:07 PM|LHC@home|Reason: To fetch work
7/12/06 12:12:07 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:12:12 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:12:12 PM|LHC@home|No work from project
7/12/06 12:13:52 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:13:52 PM|LHC@home|Reason: To fetch work
7/12/06 12:13:52 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:13:57 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:13:57 PM|LHC@home|No work from project
7/12/06 12:20:13 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:20:13 PM|LHC@home|Reason: To fetch work
7/12/06 12:20:13 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:20:18 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:20:18 PM|LHC@home|No work from project
7/12/06 12:30:36 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:30:36 PM|LHC@home|Reason: To fetch work
7/12/06 12:30:36 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:30:41 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:30:41 PM|LHC@home|No work from project
7/12/06 12:47:55 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 12:47:55 PM|LHC@home|Reason: To fetch work
7/12/06 12:47:55 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 12:48:00 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 12:48:00 PM|LHC@home|No work from project
7/12/06 1:26:47 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 1:26:47 PM|LHC@home|Reason: To fetch work
7/12/06 1:26:47 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 1:26:53 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 1:26:53 PM|LHC@home|No work from project
7/12/06 4:57:01 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 4:57:01 PM|LHC@home|Reason: To fetch work
7/12/06 4:57:01 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 4:57:06 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 4:57:06 PM|LHC@home|No work from project
7/12/06 4:58:10 PM|LHC@home|Fetching master file
7/12/06 4:58:15 PM|LHC@home|Master file download succeeded
7/12/06 4:58:21 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 4:58:21 PM|LHC@home|Reason: To fetch work
7/12/06 4:58:21 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 4:58:26 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 4:58:26 PM|LHC@home|No work from project
7/12/06 4:59:30 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 4:59:30 PM|LHC@home|Reason: To fetch work
7/12/06 4:59:30 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 4:59:36 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 4:59:36 PM|LHC@home|No work from project
7/12/06 5:00:39 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 5:00:39 PM|LHC@home|Reason: To fetch work
7/12/06 5:00:39 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 5:00:44 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 5:00:44 PM|LHC@home|No work from project
7/12/06 5:01:49 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 5:01:49 PM|LHC@home|Reason: To fetch work
7/12/06 5:01:49 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 5:01:54 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 5:01:54 PM|LHC@home|No work from project
7/12/06 5:02:58 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 5:02:58 PM|LHC@home|Reason: To fetch work
7/12/06 5:02:58 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 5:03:03 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 5:03:03 PM|LHC@home|No work from project
7/12/06 5:05:23 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 5:05:23 PM|LHC@home|Reason: To fetch work
7/12/06 5:05:23 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 5:05:28 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 5:05:28 PM|LHC@home|No work from project
7/12/06 5:11:22 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 5:11:22 PM|LHC@home|Reason: To fetch work
7/12/06 5:11:22 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 5:11:27 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 5:11:27 PM|LHC@home|No work from project
7/12/06 5:27:26 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 5:27:26 PM|LHC@home|Reason: To fetch work
7/12/06 5:27:26 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 5:27:31 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 5:27:31 PM|LHC@home|No work from project
7/12/06 6:14:36 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 6:14:36 PM|LHC@home|Reason: To fetch work
7/12/06 6:14:36 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 6:14:41 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 6:14:41 PM|LHC@home|No work from project
7/12/06 7:02:23 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:02:23 PM|LHC@home|Reason: To fetch work
7/12/06 7:02:23 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:02:28 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:02:28 PM|LHC@home|No work from project
7/12/06 7:09:56 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:09:56 PM|LHC@home|Reason: To fetch work
7/12/06 7:09:56 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:10:02 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:10:02 PM|LHC@home|No work from project
7/12/06 7:11:06 PM|LHC@home|Fetching master file
7/12/06 7:11:11 PM|LHC@home|Master file download succeeded
7/12/06 7:11:17 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:11:17 PM|LHC@home|Reason: To fetch work
7/12/06 7:11:17 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:11:27 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:11:27 PM|LHC@home|No work from project
7/12/06 7:12:31 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:12:31 PM|LHC@home|Reason: To fetch work
7/12/06 7:12:31 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:13:24 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:13:24 PM|LHC@home|No work from project
7/12/06 7:14:28 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:14:28 PM|LHC@home|Reason: To fetch work
7/12/06 7:14:28 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:14:33 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:14:33 PM|LHC@home|No work from project
7/12/06 7:15:37 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:15:37 PM|LHC@home|Reason: To fetch work
7/12/06 7:15:37 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:15:42 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:15:42 PM|LHC@home|No work from project
7/12/06 7:16:46 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:16:46 PM|LHC@home|Reason: To fetch work
7/12/06 7:16:46 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:16:51 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:16:51 PM|LHC@home|No work from project
7/12/06 7:19:13 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:19:13 PM|LHC@home|Reason: To fetch work
7/12/06 7:19:13 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:19:18 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:19:18 PM|LHC@home|No work from project
7/12/06 7:22:49 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:22:49 PM|LHC@home|Reason: To fetch work
7/12/06 7:22:49 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:22:55 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:22:55 PM|LHC@home|No work from project
7/12/06 7:35:47 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 7:35:47 PM|LHC@home|Reason: To fetch work
7/12/06 7:35:47 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 7:35:53 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 7:35:53 PM|LHC@home|No work from project
7/12/06 8:25:34 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 8:25:34 PM|LHC@home|Reason: To fetch work
7/12/06 8:25:34 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 8:25:45 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 8:25:45 PM|LHC@home|No work from project
7/12/06 8:49:00 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 8:49:00 PM|LHC@home|Reason: To fetch work
7/12/06 8:49:00 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 8:49:26 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 8:49:26 PM|LHC@home|No work from project
7/12/06 9:51:02 PM||Suspending computation and network activity - running CPU benchmarks
7/12/06 9:51:02 PM|Einstein@Home|Pausing result h1_0105.0_S5R1__218_S5R1a_0 (left in memory)
7/12/06 9:51:04 PM||Running CPU benchmarks
7/12/06 9:52:03 PM||Benchmark results:
7/12/06 9:52:03 PM|| Number of CPUs: 1
7/12/06 9:52:03 PM|| 294 double precision MIPS (Whetstone) per CPU
7/12/06 9:52:03 PM|| 465 integer MIPS (Dhrystone) per CPU
7/12/06 9:52:03 PM||Finished CPU benchmarks
7/12/06 9:52:04 PM||Resuming computation and network activity
7/12/06 9:52:04 PM||request_reschedule_cpus: Resuming activities
7/12/06 9:52:04 PM|Einstein@Home|Resuming result h1_0105.0_S5R1__218_S5R1a_0 using einstein_S5R1 version 402
7/12/06 10:39:16 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 10:39:16 PM|LHC@home|Reason: To fetch work
7/12/06 10:39:16 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 10:39:43 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 10:39:43 PM|LHC@home|No work from project
7/12/06 10:40:46 PM|LHC@home|Fetching master file
7/12/06 10:40:51 PM|LHC@home|Master file download succeeded
7/12/06 10:40:57 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 10:40:57 PM|LHC@home|Reason: To fetch work
7/12/06 10:40:57 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 10:41:49 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 10:41:49 PM|LHC@home|No work from project
7/12/06 10:42:52 PM|LHC@home|Sending scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi
7/12/06 10:42:52 PM|LHC@home|Reason: To fetch work
7/12/06 10:42:52 PM|LHC@home|Requesting 864000 seconds of new work
7/12/06 10:42:57 PM|LHC@home|Scheduler request to http://lhcathome-sched1.cern.ch/scheduler/cgi succeeded
7/12/06 10:42:57 PM|LHC@home|No work from project
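The log above also shows the client's retry spacing stretching out after each "No work from project" reply: roughly a minute, then two, then several, up to an hour or more between attempts. A minimal sketch of that kind of exponential back-off, with an illustrative doubling factor, cap, and jitter rather than BOINC's actual constants:

```python
# Sketch of the retry pattern visible in the log: after each failed
# work fetch the wait roughly doubles, up to a cap, with optional
# jitter so many clients don't all retry at the same instant.
# Factor, cap, and jitter values here are illustrative assumptions.
import random

def next_delay(delay_s, factor=2.0, cap_s=86400, jitter=0.1):
    """Return the next retry delay: multiplied, capped, lightly jittered."""
    d = min(delay_s * factor, cap_s)
    return d * (1 + random.uniform(-jitter, jitter))

delays = [60.0]
for _ in range(6):
    delays.append(next_delay(delays[-1], jitter=0.0))
print([round(d) for d in delays])  # [60, 120, 240, 480, 960, 1920, 3840]
```

With jitter disabled the sequence is a clean doubling; in the real log the occasional "Fetching master file" resets the client back to short intervals, which is why the gaps shrink again partway through.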

ID: 14297
Dotsch

Joined: 7 Aug 05
Posts: 60
Credit: 59,179
RAC: 7
Message 14301 - Posted: 13 Jul 2006, 4:21:55 UTC - in response to Message 14293.  


Option 2:
Install a premium service for $5 per month that reads the "WorkUnits available" counter on the website every hour and immediately emails/messages/phones all customers when the counter is, say, >1000 *g*

What about starting a thread in which the first person to see new work posts a notice that work is available? If you subscribe to the thread, you will get an email whenever anyone posts in it.
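As a rough sketch of what such a watcher could look like, polling a server-status page and alerting when the counter passes a threshold. The URL, the page format, the regex, and the alert hook are hypothetical placeholders, not the project's real interface:

```python
# Hypothetical work-unit watcher: poll a project status page and alert
# when the "ready to send" counter passes a threshold. STATUS_URL and
# the parsing regex are illustrative placeholders only.
import re
import time
import urllib.request

STATUS_URL = "http://lhcathome.cern.ch/server_status.php"  # hypothetical

def workunits_available(html):
    """Pull a 'ready to send' count out of a status page, 0 if absent."""
    m = re.search(r"ready to send\D*(\d+)", html, re.IGNORECASE)
    return int(m.group(1)) if m else 0

def watch(threshold=1000, interval_s=3600, alert=print):
    """Poll forever; call alert() whenever the counter exceeds threshold."""
    while True:
        page = urllib.request.urlopen(STATUS_URL).read().decode("utf-8", "replace")
        n = workunits_available(page)
        if n > threshold:
            alert(f"{n} work units available - go get them!")
        time.sleep(interval_s)
```

Swapping `alert=print` for a mail or instant-message hook would turn this into the notification service being joked about above.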

ID: 14301
dr_mabuse

Joined: 30 Dec 05
Posts: 57
Credit: 819,592
RAC: 17
Message 14308 - Posted: 13 Jul 2006, 6:50:35 UTC - in response to Message 14301.  


Option 2:
Install a premium service for $5 per month that reads the "WorkUnits available" counter on the website every hour and immediately emails/messages/phones all customers when the counter is, say, >1000 *g*

What about starting a thread in which the first person to see new work posts a notice that work is available? If you subscribe to the thread, you will get an email whenever anyone posts in it.

Hi folks,
One update every hour doesn't seem fast enough; I have noticed the work units being gone in less than 15 minutes.

It's nice to have other projects like climateprediction or World Community Grid where my contribution is needed.
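A back-of-the-envelope model of that point: if a batch of work stays available for `window` minutes and a watcher checks every `interval` minutes, a randomly timed batch can fall entirely between two checks whenever the interval exceeds the window. The numbers below are illustrative, not measured:

```python
# Toy model: a batch appears at a uniformly random time and lasts
# `window_min` minutes; checks happen every `interval_min` minutes.
# The batch is missed entirely with probability max(0, 1 - window/interval).
def miss_probability(window_min, interval_min):
    return max(0.0, 1.0 - window_min / interval_min)

print(miss_probability(15, 60))  # hourly checks miss ~75% of 15-minute batches
print(miss_probability(15, 10))  # checks every 10 minutes never miss outright
```

So with 15-minute batches, hourly polling catches only about a quarter of them; anything slower than the batch lifetime is mostly luck.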
YS
Jochen from Old Germany
ID: 14308
ralic

Joined: 2 Sep 04
Posts: 28
Credit: 44,344
RAC: 0
Message 14318 - Posted: 13 Jul 2006, 15:22:25 UTC
Last modified: 13 Jul 2006, 15:25:00 UTC

Let's all watch this video, Seed Short Film: Lords of the Ring, an exclusive tour of the underground accelerator at CERN led by the scientists who work there, and see if we can find out where the WUs are disappearing to so quickly...
ID: 14318
Adam23

Joined: 30 May 06
Posts: 40
Credit: 216,313
RAC: 30
Message 14321 - Posted: 13 Jul 2006, 17:02:29 UTC - in response to Message 14318.  

Let's all watch this video, Seed Short Film: Lords of the Ring, an exclusive tour of the underground accelerator at CERN led by the scientists who work there, and see if we can find out where the WUs are disappearing to so quickly...


Just great!
ID: 14321


©2022 CERN