Message boards : Number crunching : Please note: this project rarely has work
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 14634 - Posted: 11 Sep 2006, 6:41:45 UTC
Last modified: 11 Sep 2006, 6:53:31 UTC

This is a project that only requires occasional resources. In the past we have been told when major new sets of runs were planned, and I expect we will be told again. Minor sets of runs are submitted as and when a question arises unexpectedly. If you stay on this project, expect to go for months at a time with no work. This is considered normal on this project, so do not expect explanations to be posted of why this happens.

This project is not suitable as a stand-alone BOINC project, by which I mean as a project that is the only BOINC project you are running at all, or the only project running on a particular machine. Too often there would be no work, and you would be better off powering your machine down. Anyone here wanting a stand-alone project would save themselves, and other users, and the project team, a lot of grief by going elsewhere. I am not being nasty by saying this -- you deserve to contribute where you will be happy, and that will never be here. We all wish you all the very best in finding a project that suits your needs.

This project is very suitable as a project to run in combination with another project. If you want to run as much LHC as possible, give LHC around 90% of the resources, and whenever work is available then LHC will be the dominant project in your machine. When LHC work is not available then the other project will run.

Be assured that when work is available, the results are needed. If you remain connected to this project you *are* doing something of value: you are donating a standby computer platform to the beam physics people so that they can come back and re-check their results at any time without having to set up a whole new project to do so.

If you do decide to stay, you need to pick another project, and the reality is that the BOINC stats will show you doing a lot more work for that other project than for LHC (see my stats below for an example). Two projects I'd suggest are Rosetta or CPDN.

Rosetta has a nifty scheme where you can adjust the run length of work units even after they start. Running in parallel with LHC, you'd ask for work to run for 1 day in the normal case where there is no work here. As soon as you notice LHC has work, you'd cut Rosetta down to run its work for only 1 hour at a time. To do this you just visit the Rosetta website, change a setting, then ask the BOINC client to update Rosetta on all your machines. Running Rosetta workunits will then stop at the next checkpoint, and any queued Rosetta work will not take long to get through. Don't forget to set Rosetta back to 1 day when the LHC work is all gone!
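
If you run several machines, that "update Rosetta on all your machines" step can be scripted. A minimal sketch in Python, assuming BOINC's command-line tool (boinccmd, called boinc_cmd in older releases) is installed and each host allows remote RPC access; the hostnames and password here are placeholders:

import subprocess

# After changing the run-length preference on the Rosetta website,
# poke every machine so its client contacts Rosetta and picks up
# the new setting straight away.
HOSTS = ["box1", "box2", "box3"]             # placeholder hostnames
PASSWORD = "rpc-password"                    # each host's GUI RPC password
ROSETTA = "http://boinc.bakerlab.org/rosetta/"

for host in HOSTS:
    # '--project URL update' asks that client to contact the project
    subprocess.run(["boinccmd", "--host", host, "--passwd", PASSWORD,
                    "--project", ROSETTA, "update"], check=True)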

CPDN is good as its work units are so long that they will hardly notice when they are interrupted for a few days by LHC. It is a good choice if you have a modern machine and want to run "hands off".

Or, of course, any project where the science grabs your fancy is a good one to support, as you will be doing something that you personally value even when there is no work here. And of course, you can always pick more than one other project - BOINC works well with any number of projects on the go at once.

River~~
ID: 14634
KWSN imcrazynow

Joined: 13 Jul 05
Posts: 6
Credit: 5,355
RAC: 0
Message 14639 - Posted: 12 Sep 2006, 0:15:13 UTC

How about at least granting the credits that have been pending (in my case) since early July? It's not really enough to amount to anything, but it is at least something. I do and will continue to participate in other projects to keep my computer busy and doing useful work; however, when LHC is going into a long dry period it would be nice for the project administrators to give a heads-up to the people who are helping them with the progress of their work. I find it almost impossible to believe that they are all so busy that nobody has one minute to post a message. For example, two months between postings to the news section, which is the most logical place for important messages from the administrators. Please do not consider this post a flame or any other derogatory message. It is intended to be constructive only.
ID: 14639
anarchic teapot
Joined: 15 Feb 06
Posts: 67
Credit: 460,896
RAC: 1
Message 14646 - Posted: 15 Sep 2006, 8:35:38 UTC - in response to Message 14639.  

How about at least granting the credits that have been pending (in my case) since early July?


Perhaps this post would have been better off in one of the threads discussing this problem. While I'm not a member of the team, I would suggest that this sort of thing is better sorted out after the project has been moved to the new server system, as hunting down bugs will be more effective there.

In any case, I contribute to LHC in the hope of aiding particle physics, rather than flaunting the size of my credits. If it's a high RAC you want, LHC@home isn't for you anyway, because there are long periods sans WU.


sQuonk
Plague of Mice
Intel Core i3-9100 CPU@3.60 GHz, but it's doing its bit just the same.
ID: 14646
Misfit
Joined: 27 Aug 05
Posts: 55
Credit: 8,216
RAC: 0
Message 14648 - Posted: 16 Sep 2006, 5:03:41 UTC

You can also use BoincStudio, which allows you to set a project as "backup". Your backup project(s) will only run when your main project(s) are out of work.
me@rescam.org
ID: 14648
Krunchin-Keith [USA]
Volunteer moderator
Project tester
Volunteer developer
Volunteer tester
Joined: 2 Sep 04
Posts: 209
Credit: 1,482,496
RAC: 0
Message 14666 - Posted: 19 Sep 2006, 23:45:47 UTC

I second River~~'s statement.

When I joined LHC back in Sept 2004 it was still in beta testing, and there were only three other projects: SETI, Predictor and ClimatePrediction. I had done a little SETI and Predictor crunching before, and decided to make LHC my main project. When there was no work here I crunched other projects, and eventually ClimatePrediction and others as they came along. If you look at my total credits for all projects now, I have 1,400,000 with only 214,000 for LHC. Almost all the other credits were accumulated while LHC was without work, as LHC still has the highest resource share on my hosts. I know credit does not directly relate to time, but over the two years LHC represents only about 15% of my total, roughly 7% per year.
ID: 14666
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 14943 - Posted: 2 Oct 2006, 3:51:23 UTC - in response to Message 14634.  

This is a project that only requires occasional resources....


Sometime in the near future we expect Garfield to arrive - a second set of work run within this project. Whether this will give us work 24/7 remains unclear (at least to me), but my impression is that we will still have gaps in the work.

At least for the present my original posting remains true. This project rarely has work, and I'd suggest if you don't like that then

- for now you'd be happier going elsewhere

- it might be worth looking at Scarecrow's Graphs about once a month to see whether things have changed. The 60 day graph will be worth looking at from November 2006 onwards, as data collection only started in late September. This graph will give a quick backward look at the availability of work. Click on the link for the duration you want, then simply hover the mouse over the link for Results Ready to Send.

River~~
ID: 14943
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15001 - Posted: 6 Oct 2006, 14:39:24 UTC - in response to Message 14634.  
Last modified: 6 Oct 2006, 14:41:55 UTC

...If you stay on this project, expect to go for months at a time with no work. This is considered normal on this project, so do not expect explanations to be posted of why this happens....


Also please do not post questions about when the next work will come. We don't know. When the engineers discover another gap in their predictions, or when a magnet is replaced with another one with a slightly different field, then they need us again.

Asking when more work will come is like asking when we expect the light bulb to burn out.

If you need work to arrive as predictably as a Swiss train, I would respectfully suggest you'd be happier on another project. Again, in the nicest possible way.

R~~
ID: 15001
Tobie

Joined: 5 Sep 06
Posts: 4
Credit: 3,124
RAC: 0
Message 15005 - Posted: 6 Oct 2006, 21:48:00 UTC

I hope I am posting in the correct thread ... :)

I have been visiting the main page and server status daily, and the message board regularly. I have also manually updated LHC several times a day since the disk issue was resolved. Still, I did not see any work available or get any WUs :(

Can it be because of my resource share between projects?

ps. I don't know if this is the correct thread for this question.
ID: 15005
Keck_Komputers

Joined: 1 Sep 04
Posts: 275
Credit: 2,652,452
RAC: 0
Message 15006 - Posted: 7 Oct 2006, 0:27:59 UTC - in response to Message 15005.  

I hope I am posting in the correct thread ... :)

I have been visiting the main page and server status daily, and the message board regularly. I have also manually updated LHC several times a day since the disk issue was resolved. Still, I did not see any work available or get any WUs :(

Can it be because of my resource share between projects?

ps. I don't know if this is the correct thread for this question.

If you had read this thread there would have been no reason to post this. This project rarely has work. My 60 day graph at BOINCstats for this project is a flat line.
BOINC WIKI

BOINCing since 2002/12/8
ID: 15006
Miller Wolf
Joined: 4 Oct 06
Posts: 2
Credit: 4,054
RAC: 0
Message 15007 - Posted: 7 Oct 2006, 2:01:31 UTC
Last modified: 7 Oct 2006, 2:01:50 UTC

I really wish this project had more work. The most important and interesting Distributed Computing projects, to me, are those that pertain to Physics, especially Quantum/Elementary Particle/High-Energy Physics... and Astrophysics. For Astrophysics, it seems that there is Einstein@Home (which has an awesome graphics screen too), and SETI@home (I wish it had more interesting graphics).

LHC is of great interest to me and I really wish there was a lot more work available for LHC@Home contributors.
ID: 15007
Tobie

Joined: 5 Sep 06
Posts: 4
Credit: 3,124
RAC: 0
Message 15008 - Posted: 7 Oct 2006, 4:16:48 UTC - in response to Message 15006.  

If you had read this thread there would have been no reason to post this. This project rarely has work. My 60 day graph at BOINCstats for this project is a flat line.


If you read the 2nd message below mine and check Scarecrow's graphs, you'll see nearly 8000 WUs went out on the 1st ...

I only posted my msg because I had been checking for new work on 2 machines and somehow missed it, and I want to try to prevent that if I can by changing settings.

ID: 15008
Keck_Komputers

Joined: 1 Sep 04
Posts: 275
Credit: 2,652,452
RAC: 0
Message 15009 - Posted: 7 Oct 2006, 7:54:42 UTC - in response to Message 15008.  

If you had read this thread there would have been no reason to post this. This project rarely has work. My 60 day graph at BOINCstats for this project is a flat line.


If you read the 2nd message below mine and check Scarecrow's graphs, you'll see nearly 8000 WUs went out on the 1st ...

I only posted my msg because I had been checking for new work on 2 machines and somehow missed it, and I want to try to prevent that if I can by changing settings.

In my opinion the best way to set up this project to get work when it is available is a small queue with a high resource share. That way you normally get more attempts to fetch work in a given time period, and when there is work it will be processed more quickly as well, which is what the project wants.

There are participants who feel a large queue is more effective. However, this requires more hands-on monitoring to check for work, and it reduces the overall efficiency of the project.
BOINC WIKI

BOINCing since 2002/12/8
ID: 15009
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15011 - Posted: 7 Oct 2006, 12:55:24 UTC - in response to Message 15008.  
Last modified: 7 Oct 2006, 13:18:11 UTC


If you read the 2nd message below mine and check Scarecrow's graphs, you'll see nearly 8000 WUs went out on the 1st ...


Thanks for this question: this is a good point that we have not made clear before, or not in this thread at least.

In early August about 6000 hosts were deemed active by BOINCstats, i.e. they had received work at some time in the previous 30 days.

This suddenly dropped to 250 in late August. What this shows is that there were some small runs between about 20 July and 20 August, but that only 250 of us got any work. That is 250 boxes - fewer participants, if some got lucky twice. So while LHC is only releasing short runs, there is only around a 4% chance (250 out of 6000) of any given box getting work at all.

It is not that you and I were unlucky - it is that those who got some work were lucky. I have suggested in another thread that the project might like to respond to this by spreading the work more thinly. The majority of the responses show I am in a minority in that opinion (comments either way in that thread please, not here).

You can see the graph I am looking at here (second graph down), but for the moment (until the stats are working again) don't trust the numbers on it after 5th September, the last time LHC provided stats. So sadly, at present, we can't guess exactly how many of us got a look in at the last two work releases, the ones just before and just after the disk problems. That is also, of course, why the pie charts at the bottom show that no hosts at all have been granted credit in the last 30 days.


I only posted my msg because I had been checking for new work on 2 machines and somehow missed it, and I want to try to prevent that if I can by changing settings.


I can't improve on John Keck's answer to this, apart from adding that my strategy of lots of old boxes gives me more tickets in the lottery than if I had exactly the same crunch power in a single box. If you can't change the number of hosts, then John's is the best advice if you want "set and forget" settings.

John, would you like to say how small you mean by a "small" cache, and does it depend on the number of other projects? And what proportion of total resources would you suggest giving to LHC?

All the other strategies (and there are several) depend on regularly checking the website and changing settings or suspending other projects when you see LHC has work. These reactive strategies seem to work best when there are occasional large releases of work, like 100,000+ results. They fail on a release of only 8000 as the computers beat the humans in the scramble.

Hope that reassures even if it doesn't help.
River~~

EDIT: typos, added links, added comment about number of hosts
ID: 15011
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15012 - Posted: 7 Oct 2006, 13:28:20 UTC - in response to Message 15007.  

I really wish this project had more work.
...


Many do, for various reasons.

The original plan was to use this as a time-limited project to help the design. Then the engineers realised that it would be useful to keep the project going as a kind of "stand-by" for last minute adjustments, etc.

So we are already into an extended life. On plan A, LHC@home would have been retired by now.

Then, in response to participants' wishes for more work, the Garfield application (which is not directly connected with LHC design; it is to help design detection equipment that is just as relevant to any other particle accelerator) has been co-opted into this project. It hasn't arrived yet, and we do not even have an ETA, but those wishes are being responded to. In parallel with Garfield there will still be occasional sixtrack runs (the LHC design runs) as and when the beam physicists need them.

R~~
ID: 15012
Andreas

Joined: 2 Aug 05
Posts: 33
Credit: 2,328,412
RAC: 16
Message 15014 - Posted: 7 Oct 2006, 18:32:42 UTC

So Garfield is on its way, but are there other applications waiting further ahead? And when LHC finally starts producing huge amounts of data, will they be needing us (currently some 72000+ comps) to help with the data analysis, or will that be done completely "in-house"?
ID: 15014
Gaspode the UnDressed

Joined: 1 Sep 04
Posts: 506
Credit: 118,619
RAC: 0
Message 15015 - Posted: 7 Oct 2006, 19:39:19 UTC - in response to Message 15014.  

So Garfield is on its way, but are there other applications waiting further ahead? And when LHC finally starts producing huge amounts of data, will they be needing us (currently some 72000+ comps) to help with the data analysis, or will that be done completely "in-house"?


Geant4 has been ported to the BOINC platform, but there are issues with the amount of data needed by the application (up to 1 GB).

Once the collider starts operating, the amount of data produced continuously for ten years will be truly colossal (10 petabytes per annum) - way beyond the capacity of most (if not all) PCs. The processing will be done around the world, but on mightier machines than the average cruncher can offer.
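
To put 10 petabytes per annum in perspective, here is the back-of-envelope arithmetic in a few lines of Python (my own illustration, taking a petabyte as 10^15 bytes):

SECONDS_PER_YEAR = 365 * 24 * 3600          # about 3.15e7 seconds
bytes_per_year = 10 * 10**15                # 10 petabytes
rate = bytes_per_year / SECONDS_PER_YEAR    # sustained bytes per second
print(f"{rate / 1e6:.0f} MB/s")             # roughly 317 MB/s, around the clock

That is a sustained stream of roughly 317 MB/s for ten years - far more than any home PC or domestic connection could absorb.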
Gaspode the UnDressed
http://www.littlevale.co.uk
ID: 15015
Andreas

Joined: 2 Aug 05
Posts: 33
Credit: 2,328,412
RAC: 16
Message 15017 - Posted: 7 Oct 2006, 20:11:01 UTC - in response to Message 15015.  

So Garfield is on its way, but are there other applications waiting further ahead? And when LHC finally starts producing huge amounts of data, will they be needing us (currently some 72000+ comps) to help with the data analysis, or will that be done completely "in-house"?


Geant4 has been ported to the BOINC platform, but there are issues with the amount of data needed by the application (up to 1 GB).


OK, that's a lot :-) Probably needs plenty of RAM...


Once the collider starts operating, the amount of data produced continuously for ten years will be truly colossal (10 petabytes per annum) - way beyond the capacity of most (if not all) PCs. The processing will be done around the world, but on mightier machines than the average cruncher can offer.


I hear you, but I like the idea that we could help out in some way. My humble computer isn't the fastest around, but it's available as long as they need it (or as long as I can afford to pay for my Internet connection...).
ID: 15017
Keck_Komputers

Joined: 1 Sep 04
Posts: 275
Credit: 2,652,452
RAC: 0
Message 15022 - Posted: 8 Oct 2006, 10:01:31 UTC - in response to Message 15011.  

John would you like to say how small you mean by a "small" cache and does it depend on the number of other projects? And what proportion of total resource would you suggest giving to LHC?

By small I mean 0.5 days or less. Probably the ideal size would be 0.17 days, since 4 hours is the shortest maximum on the exponential backoff (hmmm, that doesn't seem to make sense, but it is a real hold point). In general, the more projects you have attached, the lower your queue size should be. The two formulas I use are: desired queue divided by the number of projects, and 40% of the shortest deadline divided by the number of projects.
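
Here is the same pair of rules of thumb as a small Python helper (my own illustration, nothing built into BOINC; the idea is to take the smallest of the candidate sizes):

def suggested_queue_days(desired_queue_days, shortest_deadline_days, num_projects):
    # Rule 1: the queue you would normally want, split across projects.
    by_queue = desired_queue_days / num_projects
    # Rule 2: 40% of the shortest deadline, split across projects.
    by_deadline = 0.4 * shortest_deadline_days / num_projects
    # Use the smallest of the candidate sizes.
    return min(by_queue, by_deadline)

# Example: 0.5 day desired queue, 7 day shortest deadline, 4 projects.
print(suggested_queue_days(0.5, 7.0, 4))    # 0.125 days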

What resource share to use is harder to answer. Double a normal project seems to work well for me. It is possible to set it too high with 5.4.x and lower clients: those clients will eventually run dry if a project's long-term debt (LTD) is high enough and that project cannot supply work. Later clients (testing versions only) will keep at least the connect setting's worth of work no matter how out of whack the LTD is.
BOINC WIKI

BOINCing since 2002/12/8
ID: 15022
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15047 - Posted: 11 Oct 2006, 5:18:49 UTC - in response to Message 15022.  

By small I mean 0.5 days or less. Probably the ideal size would be 0.17 days ... desired queue divided by the number of projects


hmm. It sounded good, and I followed your advice, John, and set a 0.04 day cache on my 11 boxes - that is 0.17 divided among four projects. Is that what you meant, John?

And it worked in a way, as I did get work for the first time in four releases :-)

Just one task assigned to just one box. :-(

Only one box got lucky; the results were gone so fast that none of the rest got a look in. Clearly, to keep up in the distribution of work, I am going to need to set a cache that allows me to get more than one task when my box does get lucky, because that is what others are doing. Otherwise I will continue to slip down the position chart as a few others get 40 tasks at a time.

Suddenly I feel glad the stats are not being exported at present.

R~~
ID: 15047
Profile Keck_Komputers

Joined: 1 Sep 04
Posts: 275
Credit: 2,652,452
RAC: 0
Message 15055 - Posted: 11 Oct 2006, 7:18:52 UTC - in response to Message 15047.  


hmm. It sounded good, and I followed your advice, John, and set a 0.04 day cache on my 11 boxes - that is 0.17 divided among four projects. Is that what you meant, John?

And it worked in a way, as I did get work for the first time in four releases :-)

R~~

I actually meant it as an either/or type thing, but if it works then great. Figure out the various ways and use the smallest of them.
BOINC WIKI

BOINCing since 2002/12/8
ID: 15055