Message boards : Number crunching : Can't Access Work Units

Klimax
Joined: 27 Apr 07
Posts: 18
Credit: 9,970
RAC: 0
Message 17447 - Posted: 20 Jul 2007, 15:42:39 UTC

I hope another large batch comes soon, because I managed to screw up my prefs so badly that my PC thought it was overcommitted when WUs were available... :-(
ID: 17447
Profile [B^S] BOINC-SG
Joined: 3 Jan 07
Posts: 6
Credit: 3,068
RAC: 0
Message 17448 - Posted: 20 Jul 2007, 15:59:48 UTC

I got one! Yeah!
BOINC Synergy!


My NEW BOINC-Site!
ID: 17448
Profile dr_mabuse
Joined: 30 Dec 05
Posts: 57
Credit: 835,284
RAC: 0
Message 17450 - Posted: 20 Jul 2007, 16:01:57 UTC - in response to Message 17445.  

Got 6 WUs. The first one crashed when I switched on the graphics and clicked into full screen mode. This is a known problem, but it seems to persist.

Happy crunching !
J. from Germany
ID: 17450
Profile caspr
Joined: 26 Apr 06
Posts: 89
Credit: 309,235
RAC: 0
Message 17453 - Posted: 20 Jul 2007, 17:11:44 UTC

WOW, I never thought there would be such a "feeding frenzy" for test WUs! I only put 1 on each box to make sure all was well! THE MASSES ARE HUNGRY!!!
:O)
A clear conscience is usually the sign of a bad memory


ID: 17453
Profile DarkWaterSong
Joined: 5 Aug 05
Posts: 9
Credit: 4,201,701
RAC: 497
Message 17454 - Posted: 20 Jul 2007, 17:42:53 UTC

Happy crunching 4 WUs from LHC!

...as well as one from ClimatePredict, Seti and Einstein + ogles from PrimeGrid.
ID: 17454
PovAddict
Joined: 14 Jul 05
Posts: 275
Credit: 49,291
RAC: 0
Message 17471 - Posted: 21 Jul 2007, 0:29:07 UTC - in response to Message 17387.  

Maybe someone should figure out how to better allocate work units.

Why should someone with the computing horsepower be able to capture all the available work units just because they are constantly online with their systems?


We are looking into this.

The thing is, it is not just about being online all the time (I'm on 24/7 and only got one WU to crunch on the last run). Some people who use BOINC change their settings or tweak what information they submit in order to hog the WUs; this could be seen as cheating or as creativity, depending on your viewpoint.

I personally would prefer to see each user get one WU per host per run, and if there is work left over, to have it shared out the same way.

This is not just an LHC@home issue; it is simply more apparent here because of our limited supply of WUs.

BOINC already has a way to limit how many workunits a host can have at a time - but I'm guessing you still have an old (ancient, actually) server version.
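
For reference, per-host limits of that kind are set server-side in the project's config.xml. Here is a minimal sketch, assuming a server version recent enough to support the standard scheduler options; the option names below are the usual BOINC ones, but the exact names and semantics should be checked against the server documentation for the version actually deployed:

<boinc>
  <config>
    <!-- illustrative sketch only: verify each option is supported by your server version -->
    <!-- cap on results a single host may have in progress at one time -->
    <max_wus_in_progress>2</max_wus_in_progress>
    <!-- cap on results any one host can be sent per day -->
    <daily_result_quota>10</daily_result_quota>
    <!-- never send two results of the same workunit to one user -->
    <one_result_per_user_per_wu/>
  </config>
</boinc>

With something like max_wus_in_progress in place, a fast 24/7 host still returns results quickly, but it cannot drain a small test batch on its own.
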
ID: 17471
PovAddict
Joined: 14 Jul 05
Posts: 275
Credit: 49,291
RAC: 0
Message 17472 - Posted: 21 Jul 2007, 0:30:43 UTC - in response to Message 17410.  

Something seems to be turning off the upload/download server; will get this fixed ASAP.

FYI, if you download the very latest server code and create a new project, upload/download always shows disabled on the server_status page, even if it's enabled. I'm guessing your two-year-old BOINC server does the same too ;)
ID: 17472
anonynous2_
Joined: 15 Jun 07
Posts: 2
Credit: 15,232
RAC: 0
Message 17479 - Posted: 21 Jul 2007, 4:58:56 UTC

11 WUs between 4 boxes. Cheers! =)
ID: 17479
Profile Bellator
Joined: 23 Jul 05
Posts: 5
Credit: 50,011
RAC: 0
Message 17484 - Posted: 21 Jul 2007, 10:48:42 UTC - in response to Message 17472.  
Last modified: 21 Jul 2007, 11:36:23 UTC


I was one of the lucky ones - got 3 WUs. One is finished but is still sitting in Transfers; the upload appears stalled. The next one is almost finished. If it gets stalled as well, should I try the third one?
Incidentally, I am having the same problem with SETI, but all other projects run fine.

ID: 17484
Profile Bellator
Joined: 23 Jul 05
Posts: 5
Credit: 50,011
RAC: 0
Message 17485 - Posted: 21 Jul 2007, 11:39:47 UTC - in response to Message 17484.  


I was one of the lucky ones - got 3 WUs. One is finished but is still sitting in Transfers; the upload appears stalled. The next one is almost finished. If it gets stalled as well, should I try the third one?
Incidentally, I am having the same problem with SETI, but all other projects run fine.


A little later - the second one is not uploading either. Does anyone have an explanation?
21/07/2007 1:23:18 PM|lhcathome|[file_xfer] Started upload of file wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0
21/07/2007 1:28:20 PM||Project communication failed: attempting access to reference site
21/07/2007 1:28:20 PM|lhcathome|[file_xfer] Temporarily failed upload of wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0: http error
21/07/2007 1:28:20 PM|lhcathome|Backing off 1 min 0 sec on upload of file wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0
21/07/2007 1:28:21 PM||Access to reference site succeeded - project servers may be temporarily down.
21/07/2007 1:29:20 PM|lhcathome|[file_xfer] Started upload of file wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0
21/07/2007 1:34:22 PM||Project communication failed: attempting access to reference site
21/07/2007 1:34:22 PM|lhcathome|[file_xfer] Temporarily failed upload of wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0: http error
21/07/2007 1:34:22 PM|lhcathome|Backing off 1 min 0 sec on upload of file wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0
21/07/2007 1:34:24 PM||Access to reference site succeeded - project servers may be temporarily down.
21/07/2007 1:35:23 PM|lhcathome|[file_xfer] Started upload of file wbtest2_btest2__1__64.302_59.312__6_8__6__70_1_sixvf_boinc326157_3_0
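
The pattern in that log (the upload fails with an "http error", the client backs off, yet the reference site is reachable) usually means your own connection is fine and the project's upload handler itself is down or unreachable, so there is little to do on the client side except let BOINC keep retrying. For anyone who wants to check this by hand, here is a minimal Python sketch of the same two-way test; both URLs below are placeholders, not the project's real addresses.

# Rough sketch: tell "my connection is down" apart from "the project's
# upload server is down", which is essentially the check the BOINC client
# performs when it falls back to its reference site.
# Both URLs are placeholders, not the project's actual addresses.
import urllib.error
import urllib.request

REFERENCE_URL = "http://www.google.com/"  # stand-in reference site
UPLOAD_URL = "http://example.org/lhcathome_cgi/file_upload_handler"  # hypothetical

def reachable(url, timeout=10):
    """Return True if an HTTP request to url gets any response at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered, even if with an error status: it is reachable.
        return True
    except Exception:
        return False

if __name__ == "__main__":
    ref_ok = reachable(REFERENCE_URL)
    upload_ok = reachable(UPLOAD_URL)
    if ref_ok and not upload_ok:
        print("Connection looks fine; the project's upload server appears to be down.")
    elif not ref_ok:
        print("Reference site unreachable: check your own connection first.")
    else:
        print("Both reachable; the failures may be transient, so let BOINC retry.")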

ID: 17485
KAMasud
Joined: 7 Oct 06
Posts: 114
Credit: 23,192
RAC: 0
Message 17493 - Posted: 22 Jul 2007, 6:52:23 UTC


Servers may be down; it happens more frequently at SETI. Don't worry.
Regards
Masud.
ID: 17493
Grenadier
Joined: 2 Sep 04
Posts: 39
Credit: 441,128
RAC: 0
Message 17494 - Posted: 22 Jul 2007, 6:57:19 UTC

SETI posted this earlier this week:

2007-07-19: SETI@Home
There will be a lab-wide power outage (for repairs in a nearby building) this weekend. We'll be shutting down BOINC/SETI@home services around 16:00 PDT on Saturday, and coming back online at 08:00 PDT on Sunday. All web/data servers will be unreachable during this time.
ID: 17494
EclipseHA
Joined: 18 Sep 04
Posts: 47
Credit: 1,886,234
RAC: 0
Message 17503 - Posted: 23 Jul 2007, 2:30:52 UTC

Seems the server shut itself down again...

Right now, the stats on the main page show 1899 WUs available, but I get "no work available" when trying to get some....
ID: 17503
bass4lhc
Joined: 28 Sep 04
Posts: 43
Credit: 249,962
RAC: 0
Message 17504 - Posted: 23 Jul 2007, 3:21:39 UTC

same here, again.....

there is something dripping down from my computer; it looks like "fun".

ID: 17504
Andreas
Joined: 2 Aug 05
Posts: 33
Credit: 2,329,729
RAC: 0
Message 17505 - Posted: 23 Jul 2007, 3:33:04 UTC - in response to Message 17434.  

OK, just to add: we have it sorted so that we can get this fixed more quickly if it happens again, but it shouldn't, as we've solved the problem.


When, exactly, did you solve the problem?
ID: 17505
EclipseHA
Joined: 18 Sep 04
Posts: 47
Credit: 1,886,234
RAC: 0
Message 17506 - Posted: 23 Jul 2007, 3:38:55 UTC - in response to Message 17505.  
Last modified: 23 Jul 2007, 3:49:33 UTC

OK, just to add: we have it sorted so that we can get this fixed more quickly if it happens again, but it shouldn't, as we've solved the problem.


When, exactly, did you solve the problem?



I think it's safe to say they "thought" they solved the problem! :)

That's just an example of why this test run could do some good!

It is kind of interesting that ~10% of the work in the test batch is now queued for retransmission after only a couple of days - that seems like a high error rate to me!

(Another example of why the test run could do some good!)
ID: 17506
Profile Magic Quantum Mechanic
Joined: 24 Oct 04
Posts: 1169
Credit: 54,074,888
RAC: 51,458
Message 17508 - Posted: 23 Jul 2007, 6:23:52 UTC
Last modified: 23 Jul 2007, 6:26:57 UTC


Server Status


Up, 1,859 workunits to crunch
3,615 workunits in progress
10 concurrent connections


And still nothing for me again.....

Right now I am getting the same as you, but when I checked what was going on earlier today (I was off playing golf), I got a bunch of "checksum or signature errors".

Volunteer Mad Scientist For Life
ID: 17508
5th.rider
Joined: 13 Jul 05
Posts: 16
Credit: 45,621
RAC: 0
Message 17509 - Posted: 23 Jul 2007, 7:10:38 UTC

1860 WUs to crunch - and I am getting "no work available".
No checksum errors or anything else that looks like an error. I had some WUs two days ago, but now?
Best regards
5th rider
ID: 17509
Profile [AF>Futura Sciences>Linux] Thr...
Joined: 6 Mar 07
Posts: 8
Credit: 31,454
RAC: 0
Message 17510 - Posted: 23 Jul 2007, 7:45:07 UTC

Same here.


Up, 1861 workunits to crunch
3467 workunits in progress
10 concurrent connections

But


23/07/2007 09:44:12|lhcathome|Fetching scheduler list
23/07/2007 09:44:17|lhcathome|Master file download succeeded
23/07/2007 09:44:22|lhcathome|Sending scheduler request: Requested by user
23/07/2007 09:44:22|lhcathome|Requesting 8640 seconds of new work
23/07/2007 09:44:27|lhcathome|Scheduler RPC succeeded [server version 505]
23/07/2007 09:44:27|lhcathome|Deferring communication for 7 sec
23/07/2007 09:44:27|lhcathome|Reason: requested by project
23/07/2007 09:44:27|lhcathome|Deferring communication for 1 min 0 sec
23/07/2007 09:44:27|lhcathome|Reason: no work from project


What's the problem?
------
Thrr-Gilag Kee'rr

L'Alliance Francophone
ID: 17510
Profile Neasan
Volunteer moderator
Volunteer tester
Joined: 30 Nov 06
Posts: 234
Credit: 11,078
RAC: 0
Message 17512 - Posted: 23 Jul 2007, 8:17:11 UTC - in response to Message 17510.  

OK, it's fixed now; I just wasn't working over the weekend.
ID: 17512