Message boards :
Number crunching :
hoarding
Joined: 28 Sep 04 Posts: 43 Credit: 249,962 RAC: 0
In the LHC update I read that LHC@home wants to discourage "hoarding". We, the donors of CPU time, can only adjust our general BOINC settings; we can ask for work for a maximum of 10 days. These are plain BOINC options. If LHC@home (or any other BOINC project) wants this to work differently, then change BOINC: make BOINC notice the difference between projects. Maybe then SETI@home would accept a 10-day maximum and LHC@home something like 5 days. Just do not blame or disqualify the people with a longer reconnect time; we are all just using normal BOINC options.
Joined: 17 Sep 04 Posts: 23 Credit: 6,871,909 RAC: 0
They did not blame anyone; they discourage it (for scientific reasons). LHC does not write the BOINC core client, so just as you, the user, can only adjust settings, they have no other means either. In their case, they tuned the deadlines...
Joined: 2 Sep 04 Posts: 309 Credit: 715,258 RAC: 0
If your connect interval is longer than the deadline, then BOINC will not allow any new work to be downloaded... so 5-day deadlines set the maximum amount of work downloadable (unless you go into client_state.xml and mess with the various settings).
Joined: 29 Sep 04 Posts: 187 Credit: 705,487 RAC: 0
<blockquote> If your connect interval is longer than the deadline, then BOINC will not allow any new work to be downloaded... </blockquote> ... assuming you use a core client with that capability. Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
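The gating rule the two posts above describe (no new work once the connect interval exceeds the deadline) can be sketched in a few lines; the function name and day-based units here are illustrative, not BOINC's actual code:

```python
def can_fetch_work(connect_interval_days, deadline_days):
    # A host that expects to be offline longer than a project's deadline
    # could not return results in time, so it requests no new work.
    return connect_interval_days < deadline_days

# A 10-day connect interval against a 5-day deadline: no new work.
print(can_fetch_work(10, 5))  # False
print(can_fetch_work(3, 5))   # True
```

This is why the 5-day deadline acts as an effective cap on cache size, whatever the user's preference says.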
Joined: 17 Sep 04 Posts: 103 Credit: 38,543 RAC: 0
I've had this discussion over on the SETI boards, but I do not see how the cache can be set for each individual project. This assumes that "connect every X days" means "cache X days' worth of work". In actuality, that is not what it means, but most people believe it does. Suppose someone starts out with one project (project A), sets its cache to 10 days, and lets it fill. Now this person decides to attach to project B and wants a 5-day cache of work from it. The problem is, he already has 10 days of work from A. So does this mean he now has 15 days of work? Project A will not be able to complete its 10 days' worth of work in 10 days, because it has to share time with B. What you can do, and what is in some ways implemented by the resource shares, is for a user to say that he wants a TOTAL of 10 days of work, of which 67% goes to project A and 33% to project B.
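The TOTAL-plus-shares scheme proposed above can be sketched as follows (a hypothetical helper, not part of BOINC; the project names and split are taken from the post):

```python
def cache_days_per_project(total_cache_days, shares):
    # Split one total cache across projects in proportion to their
    # resource shares, so the caches add up to the total rather than
    # stacking on top of each other.
    total_share = sum(shares.values())
    return {name: total_cache_days * share / total_share
            for name, share in shares.items()}

# 10 days total, split 67/33 between projects A and B as in the post.
print(cache_days_per_project(10, {"A": 67, "B": 33}))
```

The point of the design is that adding project B shrinks project A's slice instead of overcommitting the host.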
Joined: 28 Sep 04 Posts: 43 Credit: 249,962 RAC: 0
@FZB: for scientific reasons LHC@home can ignore the donors' choices? LHC@home made a choice to use BOINC, and that includes the donors' choices. And LHC@home did nothing; donors had to find this out for themselves. I want to donate CPU time to LHC@home, so I do. As a donor I have the right to be informed how I can donate this time in the most effective way. @the Gas Giant: I am a plain donor; I am not messing with anything. @Jim Baize: I think the problem you describe is solved; the term "overcommitted" might ring a bell? I have never returned a result too late. In short: inform the donors of CPU time to this project. If a BOINC project wants to use settings different from the general BOINC settings, inform your donors!
Joined: 22 Jul 05 Posts: 31 Credit: 2,909 RAC: 0
for scientific reasons LHC@home can ignore the donors' choices? Yes. The only reason this project exists is to accomplish scientific work. If a volunteer has chosen settings which do not work well with this, or any other, project, he or she will have to stay with different projects. LHC@home made a choice to use BOINC, and that includes the donors' choices. No. LHC, like any other project, will choose its optimal way to work, and then it is up to us volunteers to choose projects that work well with our particular combination of hardware and internet access. Doing it the other way round would be a bit like asking victims of war to get injured in a nicer place so the Red Cross volunteers didn't have to endanger themselves. However, I agree that it would be a good idea for both LHC and the other projects to state things like deadlines in a visible place on their website. The information is available on this website already, but maybe it should be put up together with the other information about the number of WUs at various stages.
Joined: 1 Sep 04 Posts: 506 Credit: 118,619 RAC: 0
<blockquote> for scientific reasons LHC@home can ignore the donors' choices? </blockquote> Science? What science? LHC is an engineering project. It has real-world deadlines and real-world budgets to meet. The LHC@Home team, ourselves included, is providing support to that engineering effort. We can't see the overall picture from where we are, so we take guidance from our project managers as to what work needs to be done and how quickly, and we comply. If this were paid employment, the attitude of some crunchers would have had them sacked months ago. Yes, we donate, so donate what's useful to the project. If you can't, or won't, do that, then find a project whose activities more closely match your own aims. Gaspode the UnDressed http://www.littlevale.co.uk
Joined: 17 Sep 04 Posts: 103 Credit: 38,543 RAC: 0
<blockquote>@Jim Baize: I think the problem you describe is solved; the term "overcommitted" might ring a bell? I have never returned a result too late. </blockquote> The term "overcommitted" is exactly right. To keep the computer out of an overcommitted state, one must reduce the amount of work from one project to make room for the work of another. An analogy: our computers accept WUs from the different projects, but each project does not have its own separate bin on our computers for its work. All projects' work goes into the same bin. When the client wants the next WU, it looks in that bin, and that bin can only hold so much work. If you want more work in the bin from project A, the client must keep less work from project B. In short, the work on hand for one project affects the work on hand for every other project on that client.
Joined: 15 Jul 05 Posts: 17 Credit: 16,521 RAC: 0
<blockquote>In the LHC update I read that LHC@home wants to discourage "hoarding". We, the donors of CPU time, can only adjust our general BOINC settings; we can ask for work for a maximum of 10 days. These are plain BOINC options. If LHC@home (or any other BOINC project) wants this to work differently, then change BOINC: make BOINC notice the difference between projects. Maybe then SETI@home would accept a 10-day maximum and LHC@home something like 5 days. Just do not blame or disqualify the people with a longer reconnect time; we are all just using normal BOINC options. </blockquote> LHC@Home COULD reduce the maximum number of results per day that each host can download. This would effectively prevent hoarding, but it would have the unfortunate side effect of not allowing any more work if all of your results produce bad particles. Right now it is set at 100 per day per host, and it is reduced by 1 for every invalid result returned. LHC could set it to 15 per day per host, and that would help eliminate hoarding.
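The quota mechanism described above can be sketched like this. The decrement on an invalid result matches the post; the recovery rule for valid results is an assumption for illustration (BOINC's actual recovery policy may differ):

```python
def update_daily_quota(quota, result_valid, max_quota=100):
    # Each invalid result lowers the per-host daily quota by one, as the
    # post above describes; here a valid result is assumed to double the
    # quota back toward the cap. The floor of 1 keeps a host from being
    # locked out entirely.
    if result_valid:
        return min(max_quota, quota * 2)
    return max(1, quota - 1)

print(update_daily_quota(100, False))  # 99
print(update_daily_quota(50, True))    # 100
```

Lowering `max_quota` to 15 would cap hoarding, at the cost described above for hosts whose results all fail.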
Joined: 17 Sep 04 Posts: 12 Credit: 54,554 RAC: 0
<blockquote> LHC@Home COULD reduce the maximum number of results per day that each host can download. This would effectively prevent hoarding, but it would have the unfortunate side effect of not allowing any more work if all of your results produce bad particles. Right now it is set at 100 per day per host, and it is reduced by 1 for every invalid result returned. LHC could set it to 15 per day per host, and that would help eliminate hoarding. </blockquote> LHC is a project that depends on previous results to produce and hand out new work; they need the results as fast as possible. Look at the stats: LHC has had no work for three days, and there are still over 11K results "missing". Limiting WUs per CPU would, in this case, mean that almost all results were finished and new work would most likely be on the way for *all* users. I never understood the point of fetching more than 2 days of work. There is always at least one project that has enough work, so why not attach to multiple projects? That is one reason BOINC was developed. I have seen computers in the stats that collect dozens or hundreds of WUs and return only a few (maybe due to system crashes, or because their owners lost interest). Everyone else has to wait until the WUs are resent. That is no real problem for a project, but it is annoying when you have to wait for credits, or for project devs who need the results to send out new work. If I had to decide, I would limit "get work for X days" to 3 days. IMHO no one needs more, but that's just me...
Joined: 2 Sep 04 Posts: 309 Credit: 715,258 RAC: 0
BOINC limits the number of WUs a host can download based on the project resource share, the connection interval, the estimated time to completion (which is derived from the benchmarks and the estimated FLOPs the project says a WU will need), and the WU deadline. Surely this is sufficient? Live long and crunch. Paul (S@H1 8888) BOINC/SAH BETA
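A rough sketch of how the four factors listed above could combine into a work request (illustrative only, not BOINC's actual scheduler code; the 24-hours-per-day figure assumes the host crunches around the clock):

```python
def estimate_results_to_fetch(resource_share, connect_interval_days,
                              est_hours_per_result, deadline_days):
    # Deadline gate: if the host cannot reconnect before the deadline,
    # it asks for nothing at all.
    if connect_interval_days >= deadline_days:
        return 0
    # Otherwise request enough results to cover this project's share of
    # the CPU time until the next expected connection.
    wanted_cpu_hours = resource_share * connect_interval_days * 24.0
    return int(wanted_cpu_hours // est_hours_per_result)

# Half the CPU, a 2-day interval, 6-hour results, a 5-day deadline:
print(estimate_results_to_fetch(0.5, 2, 6, 5))  # 4
```

Under this model a large cache request simply fails the deadline gate, which is how short deadlines discourage hoarding without any per-project cache setting.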
Joined: 2 Sep 04 Posts: 165 Credit: 146,925 RAC: 0
There is a legitimate reason for needing more than a 2-day cache: some modem users cannot connect every night (for some it is only once per week, and those users should not run projects with shorter deadlines). For always-connected users, the optimal approach is a very small cache and several projects (if you have a favorite, by all means set its resource share larger than the others'). The newer versions of BOINC handle very different resource shares fairly gracefully, by entering EDF and No Work Fetch mode and then not downloading work from the offending project while the other projects catch up on CPU time. BOINC WIKI
©2024 CERN