1) Message boards : Number crunching : Download problems? (Message 19849)
Posted 29 Jul 2008 by genes
I also have two WU's stuck at 0.47/266KB. Same http error.

7/29/2008 2:09:36 PM|lhcathome|Temporarily failed download of w2_lhc273_14__46__s__64.28_59.31__16_18__5__52.5_1_sixvf_boinc149062.zip: http error

2) Message boards : Number crunching : Statistics not updated (Message 16032)
Posted 4 Jan 2007 by genes
Do you mean Paul Buck's BOINC Wiki? It has moved to here:


It has some LHC-specific info, but the "Official FAQ" link is also broken (it points to the same place). Hope that helps a little.
3) Message boards : Number crunching : Latest information on the move to UK (Message 15874)
Posted 23 Dec 2006 by genes
Thanks for all the info! We will be here when you are ready.

In the meantime, we all wish you the best in this holiday season, and a happy and peaceful new year.

4) Message boards : Number crunching : Did everyone get work 02 Nov UTC? (Message 15329)
Posted 3 Nov 2006 by genes
I got from 1 to 3 WU's on every machine! All have 0.1 day cache, and all work on a bunch of projects. Happy, happy, joy, joy!!!
5) Message boards : Number crunching : Fairer distribuiton of work (Flame Fest 2007) (Message 15179)
Posted 25 Oct 2006 by genes
I got some! I got some! I got 3 on one of my machines. It just happened to be asking at the right time. I only keep a 0.1 day cache. It helps to have several machines asking; that way, the chance of one of them asking when the work is actually there increases.
6) Message boards : Number crunching : Graphical interfase (Message 15144)
Posted 20 Oct 2006 by genes
This project actually does have rather nice-looking graphics. The graphics, though, do not really represent the actual data being crunched (it's just a simulation), although there is a progress bar.
7) Message boards : Number crunching : Why are you a member of LHC@home? (Message 14982)
Posted 5 Oct 2006 by genes
Hi Gang,

I used to crunch a lot for LHC because I thought the project worthwhile, but I gave it a rest for a while. I check up on it every so often, and when I saw all this new energy being put into it (new servers, new application), I decided it was time to reattach a couple of machines and see how it goes. Plus, I like the graphics.

...waiting for work...

8) Message boards : Number crunching : Screensaver lockups (yes, that again) (Message 11322)
Posted 11 Nov 2005 by genes
All of my machines are running XP SP2, and all that run LHC have this problem (and always have). The machine I'm using at this very moment, I just found crashed (I had to cycle the power) with a frozen LHC screensaver on the monitor. When Windows came back up, the Boinc manager had two LHC WU's running (yes, it's a dual processor machine). My machines range from dual P3 1GHz to Pentium D 3.0 GHz (dual core). I also have a couple of mid-range P4's without HT. All have problems.

So, in my experience, this bug has never gone away, only the LHC WU's have come and gone, and whenever they're coming, this bug comes with them.

BTW, all machines have some sort of nVidia card, and various versions of nVidia drivers, none of which I would call particularly buggy. I've tried ATI cards, and they don't work with many of the Boinc projects. What's left to try?
9) Message boards : Number crunching : LHC Wish List (Message 8407)
Posted 12 Jul 2005 by genes
#7 - (I think one of the #5's should be #6) - I seem to remember reading somewhere that the graphics were just showing a canned simulation that had nothing to do with the WU data being crunched. If this is indeed true, would it be possible to make the graphics relate in some way to the data? How about making the number of particles in the graphics track the real number being crunched?

10) Message boards : Number crunching : New work?? Sort of...... (Message 8147)
Posted 20 Jun 2005 by genes
I got 4 of them. Estimated time 1:05 (that's 1 minute, 5 seconds!) on a P4, 1.8GHz, non-HT. They haven't started running yet, so I can sit and look at them in the queue...
11) Message boards : Number crunching : shorter deadline please! (Message 7932)
Posted 3 Jun 2005 by genes
@jrenkar - this is a known bug with duals and HT's. Hopefully they will fix it in the next version. What I've done is tweak my resource share and edit the xml to get two CPDN's on each dual or HT box. That way, you never run out of work. (Well, maybe some day...) ;)

@GG - yes, you can do decimals without foincing it up! I ran with 0.5 for a long time.
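For anyone wanting to try the decimal cache setting, here is a minimal sketch. This assumes the newer-style `global_prefs_override.xml` file in the BOINC data directory (back in the day this was just the "connect to network about every X days" web preference, so treat the filename and tag names as an assumption about your client version):

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory -->
<!-- Hypothetical example: a half-day work cache; decimals are accepted -->
<global_preferences>
  <work_buf_min_days>0.5</work_buf_min_days>
</global_preferences>
```

After saving it, tell the client to re-read preferences (or restart it) and the local value overrides the project web preference.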

Happy crunching!
12) Message boards : Number crunching : LHC Project Demise ... ??? (Message 7102)
Posted 21 Apr 2005 by genes
> As this project is a major interest area for me, well, they will have to shoot me to get me to leave. (Paul)

I second that. My machines are sitting here with the 4.67 exe, waiting for new work. Of course, they are busy with other BOINC projects while they wait, as is the intent of BOINC. So they will crunch the work when it becomes available, and will not run out of patience. They are computers, after all...


> The problem is that boinc does not bind a WorkUnit to a specific version of the executable. (Chrulle)

??? Is this really true? I'm (almost 100%) positive that on other BOINC projects each WU specifies which EXE it needs to run. Isn't that what the "application" column is showing?

13) Message boards : Number crunching : New WUs (Message 6890)
Posted 9 Apr 2005 by genes
Two of my machines got both a 4.64 and a 4.66. One just got a 4.64. All are Windows boxes. Go figure. So far, no problems with any of them.
14) Message boards : Number crunching : The 'Zero CPU' problem ... !!! (Message 6404)
Posted 5 Mar 2005 by genes
While credits are good for the team, the real reason I leave my machines on 24/7 to crunch for these projects is the science. And, of course, the pretty screensavers... ;)

15) Message boards : Number crunching : Already have result... error (Message 6297)
Posted 3 Mar 2005 by genes
I had this problem a while back, and I thought I was the only one. Someone is also having it on Predictor.

old thread:

Predictor thread:

The cure (for me, anyway) was to detach/reattach.
16) Message boards : Number crunching : Results page (Message 6152)
Posted 27 Feb 2005 by genes
Ditto on those colors...

But of course you can always see the deadlines on the CC. Not that they should be a problem on this project, since we have 2 weeks, unlike Einstein.
17) Message boards : Number crunching : Already have result ... ? (Message 6081)
Posted 25 Feb 2005 by genes
Reset didn't fix it, only detaching/reattaching.
18) Message boards : Number crunching : Already have result ... ? (Message 6014)
Posted 24 Feb 2005 by genes
As an experiment, I left one of my machines to continue attempting to download work without detaching/reattaching, and it is now getting work, or at least has gotten work. It has 5 WU's that errored out like this:

2005-02-23 09:08:51 [LHC@home] Unrecoverable error for result v64lhc95-38s10_12530_1_sixvf_2258_3 (CreateProcess() failed - The process cannot access the file because it is being used by another process. (0x20))
2005-02-23 09:08:51 [LHC@home] CreateProcess() failed - The process cannot access the file because it is being used by another process. (0x20)
2005-02-23 09:08:52 [LHC@home] Deferring communication with project for 59 seconds

then, over and over, I get these messages:

2005-02-23 09:15:59 [LHC@home] Already have result v64lhc95-38s10_12530_1_sixvf_2258_3

On the work tab of the CC, the status is listed as "computation error". I only showed one of them, but there are five. They do not upload, but it looks like the CC is trying to download them again.
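For anyone wanting to count how often that loop is firing, a quick sketch (this assumes the client's messages also land in stdoutdae.txt in the BOINC directory, its default log file; sample lines are used here so the commands stand alone):

```shell
# Hypothetical example: write a few sample client-log lines, then count
# the "Already have result" repeats with grep.
cat > stdoutdae.txt <<'EOF'
2005-02-23 09:08:51 [LHC@home] CreateProcess() failed - The process cannot access the file because it is being used by another process. (0x20)
2005-02-23 09:15:59 [LHC@home] Already have result v64lhc95-38s10_12530_1_sixvf_2258_3
2005-02-23 09:17:02 [LHC@home] Already have result v64lhc95-38s10_12530_1_sixvf_2258_3
EOF
grep -c "Already have result" stdoutdae.txt
```

On a real install, point grep at your own stdoutdae.txt instead of the sample file.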

Using CC 4.19.
Host ID: 7543
WU's in question:

I could probably fix this by detaching/reattaching, but I'll give it a little more time. Reset, maybe?
19) Questions and Answers : Windows : CreateProcess() failed on Dual or HT machines (Message 5893)
Posted 22 Feb 2005 by genes
OK, there goes that theory. I've had a non-HT P4 fail with the same error.
20) Message boards : Number crunching : Detached, Reattached, got work, it's all failing! (Message 5892)
Posted 22 Feb 2005 by genes
OK, there goes that theory. I've had a non-HT P4 fail with the same error.


©2024 CERN