1) Message boards : Number crunching : Work to be done! (Message 13707)
Posted 23 May 2006 by Honza
that's also a BOINC problem. Why can't we just select a cache size for every single project and every single computer? There's only a general setting; at least that's the only one I have found. I've set mine to 2 days, as I have some computers on some projects that do not have a permanent connection to the net. But setting the cache to 2 days also means that some other machines download more than they need. Hopefully the BOINC developers will change this one day.

What you suggest would not work and would break the consistency of the deadline and work-fetch policies.
For example: if you chose a 5-day cache for one project and only 1 day for another, the latter would get no work while the cache of the first one was filled for 1+ days. Resource share, long-term debts etc. also play a role.
Or, as MAGIC says - keep it balanced.
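To make the role of long-term debt concrete, here is a rough illustrative sketch (not BOINC's actual code - the function and variable names are made up for the example) of how LTD evolves: each project earns scheduling credit in proportion to its resource share and spends it when its work actually runs, and the debts are normalized to sum to zero across projects.

```python
# Illustrative sketch (not BOINC's actual code) of long-term debt updates.
def update_ltd(debts, shares, cpu_used, period):
    """debts/shares/cpu_used are dicts keyed by project name.
    period is the total CPU time handed out in this interval (seconds)."""
    total_share = sum(shares.values())
    for p in debts:
        expected = period * shares[p] / total_share  # fair share of CPU
        debts[p] += expected - cpu_used.get(p, 0.0)  # earned minus spent
    # BOINC normalizes so the debts sum to zero across projects
    offset = sum(debts.values()) / len(debts)
    for p in debts:
        debts[p] -= offset
    return debts

# Two projects with equal shares; only "seti" got CPU this interval,
# so "lhc" accumulates positive debt and will be favoured next time.
d = update_ltd({"seti": 0.0, "lhc": 0.0}, {"seti": 100, "lhc": 100},
               {"seti": 3600.0}, 3600.0)
# d["lhc"] is now +1800.0, d["seti"] is -1800.0
```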

Re: the BOINC developers - they already did: the preferences override file.
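For reference, the override file is a plain XML file (global_prefs_override.xml in the BOINC data directory) that overrides the web preferences on that machine only. A minimal sketch - the element names shown match current BOINC clients, older versions may differ:

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory.
     Overrides the project/web preferences on this machine only. -->
<global_preferences>
   <work_buf_min_days>2.0</work_buf_min_days>               <!-- cache size -->
   <work_buf_additional_days>0.5</work_buf_additional_days> <!-- extra buffer -->
   <cpu_usage_limit>100</cpu_usage_limit>
</global_preferences>
```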

You can also use BoincStudio to manually ask for a desired amount of work from any project.

>But having set the cache to 2 days also means, that some other machines are downloading more than they need.
I don't quite understand that. Each WU has a time-to-complete estimate, and BOINC prevents downloading more WUs than fit in the cache, or than can be completed before the deadline. Familiar with "computer overcommitted"?
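The point about estimates and deadlines can be sketched roughly like this (illustrative only, not the real scheduler - the function and parameter names are made up for the example):

```python
# Illustrative sketch of why the cache cannot overfill: the client only
# requests enough work to cover the cache, and skips a request entirely
# when even one WU could not finish before its deadline.
def seconds_to_request(cache_days, est_runtime_s, deadline_s, on_frac=1.0):
    """Return how many seconds of work to ask for, or 0.0 if a WU with
    est_runtime_s of CPU time could not finish within deadline_s given
    the fraction of time (on_frac) the machine is actually on."""
    if est_runtime_s / on_frac > deadline_s:
        return 0.0                       # would miss the deadline: no fetch
    return cache_days * 86400 * on_frac  # fill the cache, scaled by uptime

# 2-day cache, machine on 50% of the time: ask for 1 day of CPU work.
req = seconds_to_request(2.0, 3600, 7 * 86400, on_frac=0.5)  # 86400.0
```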
2) Message boards : Number crunching : can't download (Message 11778)
Posted 2 Jan 2006 by Honza
Well, that's another aspect of the BOINC scheduler system, and it's not easy to figure it all out, with all the interconnections.
I believe - but may be wrong - that resource share is a plain figure that divides CPU time among projects. So LTD is equally (or better said, proportionally) affected by resource share. Statistically, I would say there is a close-to-perfect correlation between resource share and LTD - although I don't have enough data to run a significant test :)

You may find some info about scheduling on the wiki. According to it, STD is involved in switching between WUs/projects (it should not affect the amount of work downloaded, for example).

P.S. Seti is down [again] so we may ask there...later.

Resource share and STD are quite clear and simple concepts; LTD in connection with suspended projects/WUs is still a bit of an unknown land.
3) Message boards : Number crunching : can't download (Message 11776)
Posted 2 Jan 2006 by Honza

Recently, my curiosity turned to the same topic.
I manually set LTD to zero for all projects on one dual-core machine. I'm running CPDN and SETI, with other projects attached - all of them with the same share (100%). Out of 8 projects, 4 are set to "No new work" and their LTD is still zero. Orbit has no application for Windows, so it never got any work, which I assume is why its LTD is also not increasing.
LHC's LTD is increasing while CPDN's and SETI's are decreasing (as those two projects provide work on a regular basis).

This summary covers only one part of your question: the "no new work" scenario.
For projects set to no new work, LTD is not increasing, as one might expect.

I may try the same with the "suspend" scenario on another machine attached to 11 projects ("no new work" set for all of them, running two CPDN SpinUp models (alpha/beta))...once I have time and am in the mood to do so...
I would expect LTD to increase on projects in the 'suspend' state.
I would also expect STD to increase when particular WU(s) are suspended.

I believe most users have kept this setting untouched, so once LHC actually has some work, the BOINC scheduler on their machines will start with the LHC project, download as many WUs as set in the general settings (unless limited by other projects' deadlines) and finish them quite soon. This may stress the LHC server to some degree...but isn't that a general problem of all BOINC projects after a 'recovery'?

Hope it helps.
4) Message boards : Number crunching : What is happening with Geant4? (Message 11502)
Posted 28 Nov 2005 by Honza
Well, the problem is that when the physicists do a simulation, they compile the model together with the environment, which then gives a 1 GB executable. This binary can then be used for a few runs, but nothing that would warrant such a large download.

Thanks for the clarification.
I wasn't sure which 1 GB we were talking about: WU download (a lot of data to analyze), WU upload (a lot of results), WU folder (a lot of results generated by the application, like CPDN), memory usage (still fine with me :-) or perhaps the whole slot (application + data, i.e. the environment).
So, it is the last one in the list.

A need for a split is evident.
Or perhaps BOINC could offer a feature to limit some WU types according to the user profile (machine and internet connection specification), or make them user-optional.

I'm sure the LHC team will find a way..."The only problem is that that will probably take longer. ;-)"
5) Message boards : Number crunching : When Will LHC Server Be Updated to Handle BOINC 5.1.*? (Message 10682)
Posted 10 Oct 2005 by Honza
<blockquote>A BOINC v4.xx client doesn't need any patches. The BOINC V5 server version is backward compatible with V4 clients.
Hence how the people still using 4.19, 4.25 to 4.45 and 4.72 on Seti@Home can still receive work from a Version 5 server. No patches needed. </blockquote>
Version 4.x has bugs. For example, the benchmark bug that sometimes corrupts a CPDN model during benchmarking (a small 10-second timeout to close the ongoing computation before the benchmark starts, IIRC).
Still, it is good that one can use BOINC 4.x with 5.x servers...
6) Message boards : Number crunching : No work sent (Message 9293)
Posted 9 Aug 2005 by Honza
I suggest editing client_state.xml: under the LHC project's section, clear the long-term debt to zero.

Set the time statistics to 1.000.

This tells BOINC that your computer is on 100% of the time and prevents this message. Note that it may actually lead to what BOINC warns about - you might not be able to finish in time if your computer runs BOINC only a couple of hours a day.
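For orientation, the relevant fragments of client_state.xml look roughly like this (element names as used by BOINC 4.x/5.x clients; stop BOINC and keep a backup before editing - the LHC URL is shown only as an example):

```xml
<!-- Fragment of client_state.xml (edit with BOINC stopped, keep a backup). -->
<project>
    <master_url>http://lhcathome.cern.ch/</master_url>
    <!-- ...other project fields... -->
    <long_term_debt>0.000000</long_term_debt>
</project>
<time_stats>
    <on_frac>1.000000</on_frac>        <!-- fraction of time the computer is on -->
    <connected_frac>1.000000</connected_frac>
    <active_frac>1.000000</active_frac>
</time_stats>
```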
7) Message boards : Number crunching : WU's not listed in the resultstable. (Message 7983)
Posted 6 Jun 2005 by Honza
Ghost WUs?
Scheduler problem?
Sounds like CPDN database trouble...

©2022 CERN