Message boards :
Number crunching :
Last wu crunched - kind of a sad feeling
Message board moderation
Author | Message |
---|---|
Send message Joined: 1 Sep 04 Posts: 55 Credit: 21,297 RAC: 11 |
No more WUs in my queue, but I am eagerly waiting for the comeback. Should we stay connected (client trying to connect), or should we detach and reattach as soon as you are online again? Finally, I think the whole team has done a tremendous job over the last weeks. Well done! As a footnote, I want to add my wish for a more representative screensaver and a server and project status page. |
Send message Joined: 2 Sep 04 Posts: 23 Credit: 9,276 RAC: 0 |
Is this now officially the end, then? I know things will lift off again, but are there definitely no more LHC units? I'll move my computers to another project if that's the case....... Oh bu$$er, that seems to mean SETI at the moment; guess I'll have to be hypocritical and go crunch some of their units for a while. Angs |
Send message Joined: 2 Sep 04 Posts: 352 Credit: 1,393,150 RAC: 0 |
It looks like it, Angstrom. I haven't been able to download any new WUs for 3 or 4 days now, so they are probably winding down and allowing everybody to empty their caches. I still have work to last a couple of days yet on a few computers, but have run out on some and attached them back to SETI for now ... |
Send message Joined: 17 Sep 04 Posts: 190 Credit: 649,637 RAC: 0 |
Some of the clients could download work this morning; some tunescans are included ;-)) The report deadline is set to November 21, 2004. This means nothing, I believe/hope, at least until that date, or until more work is ready to download around then. Interesting: many _4, _5 and _7 endings; the highest seen is _9. Could this be the "never returned" work, regenerated and put back in the download directory to complete the work cycle(s)? One client got 19 WUs, but only that one. Obviously the final batch before hibernation is loaded. |
Send message Joined: 25 Oct 04 Posts: 27 Credit: 1,205 RAC: 0 |
Same here, out of work. I've switched back to SETI, as 4.07 seems to have fixed the slowdown problems there; when LHC comes back online in Dec/Jan I will revert back to 100% LHC. I did find something of a bug in the process: if you set LHC at 200 and SETI at 100, and set it to use 3 CPUs, I found that if there's no work from LHC it will only download enough from SETI to keep 2 of them happy and leave 1 CPU idle. I thought that if there's no work from one project it would compensate and use the available capacity on the other project. Dave |
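The behaviour Dave describes can be illustrated with a small toy model (hypothetical code, not the actual BOINC scheduler; the function and numbers are made up for illustration): if CPUs are divided strictly by resource share and a dry project's share is not redistributed, CPUs sit idle when the larger project has no work. The exact count Dave saw depends on work-buffer details this sketch ignores.

```python
def allocate(shares, has_work, n_cpus, compensate=False):
    """Share-proportional CPU allocation sketch (NOT real BOINC code).

    shares:   dict project -> resource share (e.g. lhc=200, seti=100)
    has_work: dict project -> whether the project can currently supply work
    """
    if compensate:
        # Redistribute a dry project's share among projects that have work.
        pool = {p: s for p, s in shares.items() if has_work[p]}
    else:
        pool = shares  # a dry project still 'owns' its slice of the CPUs
    total = sum(pool.values())
    return {p: round(n_cpus * pool.get(p, 0) / total) if has_work[p] else 0
            for p in shares}

naive = allocate({"lhc": 200, "seti": 100}, {"lhc": False, "seti": True}, 3)
fixed = allocate({"lhc": 200, "seti": 100}, {"lhc": False, "seti": True}, 3,
                 compensate=True)
print(naive)  # {'lhc': 0, 'seti': 1} -> CPUs left idle
print(fixed)  # {'lhc': 0, 'seti': 3} -> all CPUs busy on SETI
```

With compensation, SETI inherits the whole machine while LHC is dry, which is the behaviour Dave expected.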
Send message Joined: 27 Sep 04 Posts: 282 Credit: 1,415,417 RAC: 0 |
finished my last WU this morning... have 900+ credits pending.... :-) |
Send message Joined: 18 Sep 04 Posts: 71 Credit: 28,399 RAC: 0 |
> Same here ,, out of work , I�ve switched back to seti as the 4.07 seems to > have fixed the slowdown problems there when lhc comes back on line in dec / You switched back in order to get fewer credits for crunching? That's a wee bit odd. :-D |
Send message Joined: 25 Oct 04 Posts: 27 Credit: 1,205 RAC: 0 |
Fewer credits per work unit, less time per work unit, but more work units processed. The end result is the same credits for the same processing time, but more work is done; better for the science that way, so no real loss of credits :) Dave |
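Dave's reasoning can be checked with made-up numbers (illustrative only; no per-WU figures appear in the thread): if the faster client cuts both the credit granted per WU and the crunch time per WU by the same factor, credits per hour are unchanged while WUs completed per day go up.

```python
# Hypothetical per-WU figures, chosen so both credit and crunch time
# drop by the same factor (1/3) under the faster 4.07 client.
old_credit, old_hours = 30.0, 3.0
new_credit, new_hours = 20.0, 2.0

old_rate = old_credit / old_hours   # 10.0 credits per hour
new_rate = new_credit / new_hours   # 10.0 credits per hour
assert old_rate == new_rate         # same credits for the same crunch time

old_wus_per_day = 24 / old_hours    # 8 WUs per day
new_wus_per_day = 24 / new_hours    # 12 WUs per day: more science, same credit
print(old_rate, new_rate, old_wus_per_day, new_wus_per_day)
```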
Send message Joined: 24 Oct 04 Posts: 6 Credit: 67,366 RAC: 0 |
You might find some WUs trickle out as deadlines expire and an additional WU needs to be resent. While one of my computers has been out of work for a few days... another of my computers received 3 new units on November 7th. |
Send message Joined: 2 Sep 04 Posts: 352 Credit: 1,393,150 RAC: 0 |
> You might find some WUs trickle out as deadlines expire and an additional WU > needs to be resent. While one of my computers has been out of work for a few > days... another of my computers received 3 new units on November 7th. ========== Yes, I received 1 lone tunescan on 1 computer last night, & 22 regular WUs & 2 tunescans on another, also last night. It was kinda weird really: I just happened to try to connect to the web page, and I got through, so I updated that computer & bam, it downloaded 24 WUs. But as soon as I did that it said the project was down again & I couldn't connect to the web page anymore. I just hit it lucky, I guess, when I tried. I'm still crunching LHC WUs on 4 PCs, but 1 of them will be out in a few hours, 2 will be out by the end of the day, & the other one out sometime tomorrow unless I can get more downloaded to it. It's OK though; it's getting to be a real hassle trying to keep LHC work on them, so once they shut down I'm going to re-connect to SETI & crunch over there until LHC comes back up. I'll just leave them set to 50/50 & if any trickle in they'll get crunched that way ... :) |
Send message Joined: 18 Sep 04 Posts: 71 Credit: 28,399 RAC: 0 |
> It's ok though, it's getting to be a real hassle trying to keep LHC work on > them so I'm going to re-connect to Seti & crunch over there until LHC > comes back up once they shut down. I'll just leave them set to 50/50 & if > any trickle in they'll get crunched that way ... :) That's how I've always done it. I figure that the core client knows what it's doing with regard to managing its connections to the various projects. Forcing connections all the time does NOT do the project servers any favours, especially with servers as fragile as the ones currently hosting LHC@home. |
©2024 CERN