1) Message boards : Number crunching : Are exported stats getting updated? (Message 19456)
Posted 16 Apr 2008 by Rob Lilley
Stats are exporting to Boincstats again - I checked my stats this morning, and saw that credits had come in for yesterday.

FWIW, I've been crunching for LHC since 2005 - on and off, obviously ;) , and have been attached to Boincstats for nearly as long, and my credits have always been granted and stats exported eventually.
2) Message boards : Number crunching : shut down for maintenance ? (Message 19412)
Posted 14 Apr 2008 by Rob Lilley
Well, we can HOPE that they cut some slack...

I hope so too, as I have a finished WU that was due yesterday

Problem is, it's already been reported by three machines, so I guess they don't need it, and so I would only get the credits if the overworked, underpaid, under appreciated (by everyone but me) project admins were feeling very kind...
3) Message boards : LHC@home Science : Just want to be sure this project is sending out tasks. (Message 18756)
Posted 21 Dec 2007 by Rob Lilley
Thanks, Dr.Mabuse.

An interesting source.

I missed the flow of work in October, having given up on this project about two years ago. I understand that the plan is to have many more tasks available for processing in 2008, but there is little to confirm this.

Well, I've just checked back and found I've been getting LHC work units about once a week in December, and I don't crunch 24/7 by any means. That's a lot more frequent than in the days when the project rarely had work.

Just attach to one or two more projects, and you'll always have work (unless the other projects are SIMAP, Predictor or Pirates@home).

4) Message boards : Number crunching : Can't Access Work Units (Message 17513)
Posted 23 Jul 2007 by Rob Lilley

1861 workunits to crunch

Eeek - gone in 5 minutes!
5) Message boards : Number crunching : New computer database entry created on each connect (Message 15566)
Posted 19 Nov 2006 by Rob Lilley
In a world where participants all read these boards, the project admins would simply ask each participant to use merge to tidy up their own hosts. You can do a page full at a time using the "select all" feature, so distributed over the users this is not too big a job.

However, back to the real world. Many participants don't read these boards.

How true - imho, a lot of those who post don't read these boards either, judging by the number of threads on this same subject... I know, I'm bound to get caught out on this myself now that I've mentioned it :-)

Does anyone have any other ideas for how to clean up after the bug is fixed? I hope someone has a better idea than any of mine, as I don't like any of them.

It seems to me that finding the bug will be the easy part (once a professional starts looking).

I think it may already have been done on other projects - I found this thread some time ago on the Rosetta boards referring to a patch to fix it.

One of the entries by Halifax Lad refers. The above thread also casts doubt on the theory that BAM is to blame.

As for getting people to delete the extra host ids, how about having an announcement put up on the LHC@home homepage - maybe also see about getting something put on the BOINC homepage.

Failing that, if you did implement a software solution, maybe it should do something like merge all sets of apparently identical hosts where there are more than two of them - you would still have excess hosts, but the risk of getting rid of genuine hosts could be significantly less.
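The "merge only when there are more than two apparent duplicates" idea could be sketched roughly like this. This is just an illustration, not the actual BOINC database schema - the fingerprint fields (`domain_name`, `os_name`, `p_model`) and the host dicts are assumptions for the example:

```python
from collections import defaultdict

def find_merge_candidates(hosts):
    """Group hosts by an apparent hardware fingerprint and flag
    groups with more than two members as merge candidates.

    Requiring 3+ duplicates before merging (as suggested above)
    reduces the risk of accidentally merging genuine distinct hosts.
    """
    groups = defaultdict(list)
    for h in hosts:
        # Illustrative fingerprint fields, not the real BOINC schema.
        key = (h["domain_name"], h["os_name"], h["p_model"])
        groups[key].append(h["id"])
    return [ids for ids in groups.values() if len(ids) > 2]

hosts = [
    {"id": 1, "domain_name": "pc1", "os_name": "Windows XP", "p_model": "P4"},
    {"id": 2, "domain_name": "pc1", "os_name": "Windows XP", "p_model": "P4"},
    {"id": 3, "domain_name": "pc1", "os_name": "Windows XP", "p_model": "P4"},
    {"id": 4, "domain_name": "pc2", "os_name": "Linux", "p_model": "Athlon"},
]
```

A pair of identical-looking hosts (which might really be two machines) would be left alone; only runs of three or more get flagged.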

[edit]BTW, in order to close BOINC Manager before doing the edits mentioned in the posts above, you just need to open the BOINC Manager window and click on File then Exit - not so messy as using the Task Manager. This works for Windows anyway, don't know about any other OS.[/edit]

Live long and prosper
6) Message boards : Number crunching : Did everyone get work 02 Nov UTC? (Message 15317)
Posted 2 Nov 2006 by Rob Lilley

So, apart from those two reasons, did anyone else miss out this time?


Yeah, 'cos I need to sleep and work and I don't have my computers on 24/7 ... I'm such a lightweight!
7) Message boards : Number crunching : I think we should restrict work units (Message 14255)
Posted 7 Jul 2006 by Rob Lilley
checked the project last night NO NEW WORK 10 W/U IN PROGRESS (roughly)

checked the project this morning NO NEW WORK 10308 W/U IN PROGRESS ?????

Same here, and I always have LHC set to allow new work - I must have blinked...
Seems as though the only way to get any LHC units is to have your machine running and connected to the Internet 24/7

What's worse is that I run three other projects and I'm about 3 hours from running completely dry.

Well, I keep my processor warm by being connected to seven projects: LHC, Rosetta, Einstein, Predictor, QMC, Seti and Xtremlab (i.e. effectively six :-) ). This gives a mixture of long and short deadlines, and BOINC suspends work fetch and uses earliest-deadline-first scheduling if there's ever any danger of being overcommitted. It works for me anyway, as I tend to have my machine on for most of the day, and I haven't missed a deadline since the days when I was using a 266 MHz PII running Seti@home classic only.
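The earliest-deadline-first idea mentioned above can be sketched in a few lines. This is only a toy illustration of the scheduling principle, not the BOINC client's actual code, and the task names and dates are made up:

```python
from datetime import datetime

def pick_next_task(tasks):
    """Earliest-deadline-first: run the task whose deadline comes soonest."""
    return min(tasks, key=lambda t: t["deadline"])

# Hypothetical in-progress tasks from a mix of projects.
tasks = [
    {"name": "LHC", "deadline": datetime(2006, 7, 9)},
    {"name": "Rosetta", "deadline": datetime(2006, 7, 20)},
    {"name": "Einstein", "deadline": datetime(2006, 7, 14)},
]
```

With this mix, the short-deadline LHC task would be crunched first, which is why attaching to several projects with varied deadlines tends to work out fine.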

My advice, for what it's worth, is if you always want work, don't give up on LHC, just connect up to as many other projects in addition to LHC as it takes to keep your machine occupied. For those who care about credits, you may not get as many credits for LHC, but you will get more overall, and I find you don't get stressed if one or two projects are down or don't have work.
8) Message boards : Number crunching : New host id being created at each connect (Message 13829)
Posted 2 Jun 2006 by Rob Lilley
There is a problem with several projects where the system creates a new host id every time the user's computer connects to the project. From my reading of the thread on the Rosetta forum ( http://www.boinc.bakerlab.org/rosetta/forum_thread.php?id=1669 ), a patch can be installed to fix this.

Is it possible to install the patch on the LHC servers?

At least we can merge hosts on LHC in the meantime - Rosetta has the merge facility turned off until the Autumn :-)

©2024 CERN