61) Message boards : Number crunching : Please sign BOINC-related petition (Message 13881)
Posted 4 Jun 2006 by Philip Martin Kryder
Post:
I wasn't suggesting that he host the pages, but rather that he provide a "universal BOINC portal" that read the project pages as they are (with their non-standard code, such as "<p>") and output them in his beloved XHTML.


What on earth would be the point of this? ...
<gratuitous rant snipped>...



As I indicated in an earlier post, the point would be to determine, based on actual evidence of usage, whether or not it really mattered if the pages were available in XHTML.

I suspect, as you ranted, that it REALLY DOESN'T MAKE any substantive difference whether they are XHTML or not.

But, rather than wasting David Anderson's time on some XHTML "grail quest", RYTIS could develop DATA, rather than opinion, to support his view.

In the meantime, he would be making the beloved XHTML available to all 100 people worldwide who cared.


I would, by the way, support (and sign) a petition along the lines of "Dear David, we really appreciate all that you do and all that you've done. Don't waste your time even reading, let alone responding to, cosmetic suggestions. Science first."


Mike, I hope that we actually agree that XHTML is a waste of time -

My point was that if any time is to be wasted, it should be that of the requestor and not that of the person who actually developed much of BOINC.

On the other hand, I'm humble enough to recognize that I might be wrong, and so if RYTIS were to build the BOINC XHTML Universal Portal, then he could show its true value with data.


Finally, what did you mean by the phrase "...not appropriate"?
I still don't understand that part of your earlier post.


62) Message boards : Number crunching : I think we should restrict work units (Message 13876)
Posted 4 Jun 2006 by Philip Martin Kryder
Post:
I notice that this is slowed down by a minority of users who set their caches to maximum. When the number of work units available hits zero, we still have to wait a week or more while the people who grab a maximum number of units empty their cache before the scientists can even begin their analysis.

That doesn't help the project - that's greed by people who want the most LHC units.


Thanks to your clear explanation, I raised my cache from 0.01 to 10 days.
And yup, as soon as there was work to do, I was able to get a bunch of it to work on.


I'm sure MattDavis or someone will correct me if I'm wrong, but I thought the original post, quoted in part here, was saying that you *shouldn't* max out your cache. Doing that means you get a lot of work, true. But it also means that the work gets done more slowly, because you're sitting on work that other people with smaller caches (who get new work as they complete it) could be doing. Leaving some computers dry is not the best way to get work done promptly. It slows down the process and makes everyone wait longer for the next batch. Wasn't that the whole point of the original post and of limiting work units - to make sure that everyone gets a fair share, not to have some people hogging work for themselves while others' computers are left dry?
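To make that argument concrete, here is a toy model of batch turnaround. All the numbers are made up for illustration; this is not how the BOINC scheduler actually works:

# Toy model: `units` work units, `hosts` equally fast hosts, each
# crunching one unit per day. Hoarders grab `hoard_size` units up
# front; everyone else fetches one at a time. The batch is done when
# the slowest host drains its queue.
def batch_days(units, hosts, hoarders, hoard_size):
    hoarded = hoarders * hoard_size
    per_fair_host = (units - hoarded) / (hosts - hoarders)
    return max(per_fair_host, hoard_size)

print(batch_days(1000, 100, 0, 0))    # 10.0 days with no hoarding
print(batch_days(1000, 100, 10, 50))  # 50 days: hoarders gate the batch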



Hmm - you mean that there may have been unintended consequences from starting this thread?

Even so, I'm thankful for the idea.


63) Message boards : Number crunching : Please sign BOINC-related petition (Message 13868)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:

Rytis couldn't host the pages for each project, and in any case, it wouldn't be appropriate.



What do you mean by "not appropriate?"
Can you be more specific?

Further,
I wasn't suggesting that he host the pages, but rather that he provide a "universal BOINC portal" that read the project pages as they are (with their non-standard code, such as "<p>") and output them in his beloved XHTML.

This would provide the world with the standard output that he desires, and would provide a "test bed" to show, with empirical data based on usage and demand rather than "democratic" opinion, how important standard XHTML really is.
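For what it's worth, a minimal sketch of such a portal is easy to imagine. This assumes Python with the lxml library; the URL is just an example, no such tool actually exists, and strictly speaking the output is XML-serialized HTML rather than validated XHTML:

# Toy "universal BOINC portal": fetch a project page as-is, parse it
# leniently, and re-serialize it with every tag closed, XHTML-style.
# Sketch only; error handling is omitted.
from urllib.request import urlopen
from lxml import etree, html

def to_xhtml(url):
    raw = urlopen(url).read()    # page with non-standard markup
    doc = html.fromstring(raw)   # lenient parse tolerates stray <p> tags
    return etree.tostring(doc, method="xml", pretty_print=True)

print(to_xhtml("http://lhcathome.cern.ch/").decode()[:500])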


64) Message boards : Number crunching : Relative CPU Effectiveness by Project (Message 13863)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:
Is there any source that tells the relative effectiveness of various CPUs for various projects?

Perhaps a table something like the following:

CPU type    LHC    SZTAKI   SETI   Einstein
AMDxxx      1150   900      1153   890
AMDyyy      1200   800      1300   1200
...
INTELaaa    1150   900      1153   890
INTELbbb    1200   800      1300   1200


Such a table would allow crunchers to choose projects based on the best match for their particular machine...
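If such a table existed, choosing would be mechanical. A sketch using the made-up numbers above:

# Sketch: given per-CPU benchmark scores by project (the numbers are
# made up), pick the project where each CPU scores highest.
scores = {
    "AMDxxx":   {"LHC": 1150, "SZTAKI": 900, "SETI": 1153, "Einstein": 890},
    "AMDyyy":   {"LHC": 1200, "SZTAKI": 800, "SETI": 1300, "Einstein": 1200},
    "INTELaaa": {"LHC": 1150, "SZTAKI": 900, "SETI": 1153, "Einstein": 890},
}

for cpu, by_project in scores.items():
    best = max(by_project, key=by_project.get)
    print(f"{cpu}: best match is {best} ({by_project[best]})")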


65) Message boards : Number crunching : I think we should restrict work units (Message 13862)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:
I love LHC, and I realize it's different from the other BOINC projects in that it doesn't have continuous work to send out. It sends out work, and analyzes those results before sending out the next batch.

I notice that this is slowed down by a minority of users who set their caches to maximum. When the number of work units available hits zero, we still have to wait a week or more while the people who grab a maximum number of units empty their cache before the scientists can even begin their analysis.

That doesn't help the project - that's greed by people who want the most LHC units.

When the number of available units hits zero, the scientists shouldn't have to wait more than a day or two. I suggest that the project limit the number of work units per computer to 2-3 at any given time. That way, as soon as all the work is sent out, LHC will get it all back very soon after. Once a work unit is sent back, that computer can have another.

This will speed up work-unit generation for all of us (my cache is set very low and every work unit I get is sent back within 12 hours, since I have other projects running too), because the LHC scientists will get their results back faster and can create the next batch sooner.
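In scheduler terms, Matt's suggestion amounts to something like the following sketch. The names and the limit are illustrative, not actual BOINC code:

# Hypothetical server-side check for the cap described above: refuse
# to send more work to a host that already holds MAX_IN_PROGRESS
# unreturned results.
MAX_IN_PROGRESS = 3

def may_send_work(in_progress_count: int) -> bool:
    return in_progress_count < MAX_IN_PROGRESS

print(may_send_work(2))  # True: host can take one more unit
print(may_send_work(3))  # False: host must return work first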



Matt - I want to thank you for taking the time to post this and start this thread.

Prior to your having done so, I was having difficulty getting work units to run for LHC.

Thanks to your clear explanation, I raised my cache from 0.01 to 10 days.
And yup, as soon as there was work to do, I was able to get a bunch of it to work on.

Again, thanks for your help in showing us how to get the maximum number of work units to process.

Phil

66) Message boards : Number crunching : dual core chips and BOINC (Message 13861)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:
Yes, BOINC can use numerous CPUs.
The Pentium D is not an optimal choice for dual core, however; consider the Core Duo or Athlon64 X2 instead.
Both outperform the Pentium D by a good margin, with far lower power consumption and less heat.

How do the prices compare?

I'm considering the Dell SC430 with 1 GB of memory and dual core for $499.

67) Message boards : Number crunching : dual core chips and BOINC (Message 13849)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:
I'm considering upgrading to a dual-core Intel Pentium D 820.

Are BOINC, and LHC in particular, able to use both cores?

Does it allow multiple projects to run simultaneously?

Thanks
Phil
68) Message boards : Number crunching : Please sign BOINC-related petition (Message 13848)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:
Help me understand the process.

Does this David Anderson "own" BOINC or the web pages it uses?

Are these pages used in each project or only at BOINC central or where?

How does the fact that they are not XHTML impact your personal goals?


Could you build a "betterBOINC" to front-end the BOINC pages and deliver them in XHTML?

If you could, would they be "so much better" that the world would beat a path to your portal to BOINC?

It seems a more effective technique of persuasion would be to build your own better front end.
If it really is "better", people will want to use your pages...
You could even sell ad space...

Or, if "it's really only about the science and not the pages," then natural selection will cause your pages to die out...

Finally, regarding standards.
They are a "good thing(tm)."

But, if they are missing a widely desired function (such as <p>), then perhaps they are poorly or prematurely standardized....

Please help me understand better.
thanks
Phil

In a way, David Anderson is the owner of BOINC, in that he's the project architect.
http://boinc.berkeley.edu/contact.php

From the BOINC website: "BOINC is a software platform for distributed computing using volunteered computer resources..."
So BOINC's only job is to be a distributed computing platform.

Items like GUIs are just eye candy; you can remove the graphical part and have a perfectly good distributed computing project.
Theoretically, they could just leave BOINC as a command-line executable and fulfil the distributed computing requirements.

To streamline things, we would remove graphics and have text based web pages for forums and stats.
Once it's text based, they could just have users telnet in, instead of using the HTTP protocol with its overhead of headers and tags and whatnot.

The forums, which are a drain on server resources, could be replaced by newsgroups on Usenet, such as NEWS://comp.distributed.

</humor>


Thank you.
That helps set the context a bit.

Is it possible for RYTIS to front-end the BOINC pages and make his own easier-to-use web site that fulfills his desire for XHTML?

69) Message boards : Number crunching : Please sign BOINC-related petition (Message 13844)
Posted 3 Jun 2006 by Philip Martin Kryder
Post:
Help me understand the process.

Does this David Anderson "own" BOINC or the web pages it uses?

Are these pages used in each project or only at BOINC central or where?

How does the fact that they are not XHTML impact your personal goals?


Could you build a "betterBOINC" to front-end the BOINC pages and deliver them in XHTML?

If you could, would they be "so much better" that the world would beat a path to your portal to BOINC?

It seems a more effective technique of persuasion would be to build your own better front end.
If it really is "better", people will want to use your pages...
You could even sell ad space...

Or, if "it's really only about the science and not the pages," then natural selection will cause your pages to die out...

Finally, regarding standards.
They are a "good thing(tm)."

But, if they are missing a widely desired function (such as <p>), then perhaps they are poorly or prematurely standardized....

Please help me understand better.
thanks
Phil

70) Message boards : Number crunching : I think we should restrict work units (Message 13832)
Posted 2 Jun 2006 by Philip Martin Kryder
Post:
.... [snip] .....
for what it is worth, I have error detecting and correcting memory on my machine.

I wonder how typical that is anymore...


Common on all servers of all sizes.




Sure, but I meant: how common is it among BOINC or LHC crunchers?


71) Message boards : Number crunching : I think we should restrict work units (Message 13821)
Posted 2 Jun 2006 by Philip Martin Kryder
Post:

What do you think the probability is of a single-bit (or any other) error causing the same incorrect answer in even TWO of the three members of the quorum?


Extremely small, I'd guess.

Sixtrack suffers from the single-bit sensitivity because of the way it handles its numbers, and the fact that it does the operations repeatedly. A single bit error in the first iteration of an algorithm will generate a different erroneous result than the same error occurring at, say, iteration 500,000. Given that a single bit problem can creep in potentially anywhere (and anywhen), the chances of two different computers generating the same incorrect result are vanishingly small.

The same can't be said of the same computer running the same unit twice, however. It is possible that some sort of systematic failure could generate consistent errors at consistent points in the algorithm. Such a computer would probably never generate a valid LHC result, although it might work perfectly well in every other regard.
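A toy iteration makes the point (pure illustration, not SixTrack code): flip one low-order bit at iteration 0 versus iteration 500 of an error-amplifying calculation and you get two different wrong answers, so two machines will essentially never agree on the same error.

# Illustration: a single bit flip at different iterations of a
# repeated, error-amplifying calculation yields different final values.
import struct

def flip_low_bit(x: float) -> float:
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return struct.unpack("<d", struct.pack("<Q", bits ^ 1))[0]

def run(flip_at=None, steps=1000):
    x = 0.5
    for i in range(steps):
        if i == flip_at:
            x = flip_low_bit(x)      # the single-bit memory error
        x = 3.9 * x * (1.0 - x)      # chaotic map: errors grow each step
    return x

print(run())             # clean run
print(run(flip_at=0))    # bit flip in the first iteration
print(run(flip_at=500))  # same flip at iteration 500: different result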

for what it is worth, I have error detecting and correcting memory on my machine.

I wonder how typical that is anymore...


One of the LHC discussions mentioned the development of libraries that were able to return consistent results on different machines.

If those libraries are used, then it seems a quorum of 2 with replication of 3 would suffice.
But, since the computer resource is "free" and folk often clamor for "more work," it probably leads to higher quorums and higher initial replications.

Has there been any discussion of giving "bonus points" for work units that are finished "quickly"?
It would seem this would be useful when errors from the initial replication group necessitate the resending of work units closer to the deadline...

72) Message boards : Number crunching : I think we should restrict work units (Message 13809)
Posted 1 Jun 2006 by Philip Martin Kryder
Post:
Does anyone think that the reason the initial replication is 5 while the quorum is only 3 is to generate extra work for all the work-hungry volunteers?


The five/three ratio is to improve the chances of getting a quorum at the first attempt. It's down to SixTrack's extreme sensitivity to numerical accuracy. In even the most solid computer there can be the occasional single-bit error that will throw the result off. Sending five results should improve the chance of reaching a quorum, and so reduce the completion time for the study.

From what I see on the results pages, most results reach quorum at three, so a replication of five is redundant. I'd like to know if the fourth and fifth results are still issued if a quorum has already been reached.
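The arithmetic behind the ratio is easy to sketch. Assuming each returned result independently goes bad with probability p (the 5% figure below is made up, purely for illustration):

# Chance of at least `quorum` good results out of `sent` copies,
# assuming independent failures with probability p.
from math import comb

def p_quorum(sent, quorum, p=0.05):
    return sum(comb(sent, k) * (1 - p) ** k * p ** (sent - k)
               for k in range(quorum, sent + 1))

print(p_quorum(3, 3))  # send exactly 3: ~0.857
print(p_quorum(5, 3))  # send 5, need any 3: ~0.999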


What do you think the probability is of a single-bit (or any other) error causing the same incorrect answer in even TWO of the three members of the quorum?
73) Message boards : Number crunching : I think we should restrict work units (Message 13804)
Posted 1 Jun 2006 by Philip Martin Kryder
Post:
Does anyone think that the reason the initial replication is 5 while the quorum is only 3 is to generate extra work for all the work-hungry volunteers?

