Bug in the credit assignment?
Joined: 23 Oct 04 Posts: 8 Credit: 653,447 RAC: 873
Look, please, at the following link: 4 users were granted 32 credits, 1 user was granted 16. How is this possible? If the result of host 37093 was invalid, 0 credits should be granted. If the result was OK, I would expect 32 credits. The result was returned as the 2nd one, so there should be no difference from the others. Here is the link. Edit: The validate state of the result is invalid, so I should be granted 0 credits to be fair to the others.
Joined: 23 Oct 04 Posts: 8 Credit: 653,447 RAC: 873
> Look, please, at the following link: 4 users were granted 32 credits, 1 user was granted 16. How is this possible?
> If the result of host 37093 was invalid, 0 credits should be granted. If the result was OK, I would expect 32 credits.
> The result was returned as the 2nd one, so there should be no difference from the others.
> Here is the link.
Oops, problem with the link. It should be http://lhcathome.cern.ch/workunit.php?wuid=300995
Joined: 23 Oct 04 Posts: 8 Credit: 653,447 RAC: 873
There is also another reason why an invalid result should be granted 0 credits, beyond just being fair to the others. If I produce invalid results but am still awarded credits, I have no indication that something is wrong on my side. I think nobody regularly checks whether they are granted the same amount of credit as the others; I noticed this case purely by accident. It sometimes happens that others claim much less than me, so I get fewer credits than expected. This case is different, as I returned an invalid result. That may be an accident too, but if I return invalid results more often, I am not aware of it. If an invalid result is granted 0 credit, I can see immediately that something is wrong and try to find out what. Edit: Yeah, I have found that more of my results are invalid... good to know...
Joined: 17 Sep 04 Posts: 49 Credit: 25,253 RAC: 0
Not sure that is a bug; the WU was a success but was invalid because it did not match the rest of the crunchers' results, would be my guess..... BOINC Wiki
Joined: 13 Jul 05 Posts: 64 Credit: 501,223 RAC: 0
> Not sure that is a bug; the WU was a success but was invalid because it did not match the rest of the crunchers' results, would be my guess.....
Invalid is invalid is 0 credit, no matter the cause, imho. Getting more or less than claimed is nothing unusual, as Hefto99 said, and as long as I get credits, I would presume that I had only valid results. Can anyone explain why this project has decided to change the common BOINC policy? Greetings from Sänger
Joined: 2 Sep 04 Posts: 378 Credit: 10,765 RAC: 0
I think in this case we should ask: what is it about this host that causes a percentage of its results not to match? http://lhcathome.cern.ch/results.php?hostid=37093 This goes back to the Intel versus AMD floating point differences found 259 days ago. What would typically happen was that sometimes they'd validate on the AMD results, and sometimes on the Intel results. Back then, if you were a Pentium user and your work unit was also sent to 3 AMDs, you might have gotten zero credit. It wasn't fair to penalize someone for crunching on a different CPU than his work-unit mates. http://lhcathome.cern.ch/FAQ.html#2.5 "If a work unit has 10 successful results but there are not enough identical results, the results are granted points according to how well they match the other results. The best-matching result will receive the median average of the claimed credits (calculated in the same way as the canonical credit). Other results will get credit proportional to the best credit and the result's match points. Although users get credit from this kind of result, the results are not used in physics studies." I'm not the LHC Alex. Just a number cruncher like everyone else here.
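A minimal sketch of how the partial-credit scheme quoted from the FAQ could work, assuming each result gets a 0-to-1 match score and using a plain median as the "best" credit (the real validator may compute both differently; all names and numbers below are hypothetical, not LHC@home server code):

    # Hypothetical illustration of the FAQ's partial-credit scheme.
    # Assumes each result has a match score in [0.0, 1.0]; 1.0 = perfect match.

    def median_claimed_credit(claimed):
        """Median of the claimed credits, used here as the 'best' credit."""
        s = sorted(claimed)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

    def grant_credits(claimed, match_scores):
        """claimed[i] is the credit claimed by result i; match_scores[i] is how
        well it matches the other results. Returns the credit granted to each."""
        best = median_claimed_credit(claimed)
        return [best * score for score in match_scores]

    # Four well-matching results and one half-matching outlier:
    print(grant_credits([31.8, 32.4, 32.0, 32.2, 30.9], [1.0, 1.0, 1.0, 1.0, 0.5]))
    # -> [32.0, 32.0, 32.0, 32.0, 16.0]

Under these assumptions it would reproduce the 32/16 split seen in the work unit above: the odd result still gets something because it partially matched the others rather than failing outright.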
Joined: 13 Jul 05 Posts: 64 Credit: 501,223 RAC: 0
> This goes back to the Intel versus AMD floating point differences found 259 days ago.
> What would typically happen was that sometimes they'd validate on the AMD results, and sometimes on the Intel results. Back then, if you were a Pentium user and your work unit was also sent to 3 AMDs, you might have gotten zero credit.
> It wasn't fair to penalize someone for crunching on a different CPU than his work-unit mates.
Why wasn't homogeneous redundancy introduced instead of credit tweaking? Greetings from Sänger
Joined: 17 Sep 04 Posts: 190 Credit: 649,637 RAC: 0
..because this is a science-based project ;---) I guess a valid result that matches is more useful to the science than a result that differs too much. The basis of this project is not like scanning something already done, as at SETI or Einstein (analysing prerecorded data); no, here the experiments are "generated" and the client calculates them. Just look at SZTAKI, the times there are crazy too :) So far my understanding, perhaps not correct. The message is that the runtime of a WU can differ within a small tolerance depending on the experiment, and it can also finish in a much shorter time. This will also have an influence on the "credit" calculation. It looks like you have spent too much time at SETI (*G*)... Enjoy the ride "as it is" and crunch whatever you can. In the long run, only power helps..
Joined: 13 Jul 05 Posts: 64 Credit: 501,223 RAC: 0
> ..because this is a science-based project ;---)
So are at least Einstein, Predictor, Folding and CPDN; SETI is indeed open for discussion ;)
> I guess a valid result that matches is more useful to the science than a result that differs too much.
Define "valid"! Validated against what? The only real validation is of course against reality, and as long as Heisenberg doesn't jump in, there is only one of it. If it is against other results, and the outcome of different CPU/OS combinations varies in an inappropriate manner, there are imho the following possibilities (don't beat me if I miss some):
1. The results of one of the combinations are not valid, and that combination therefore has to be excluded from this project.
2. The code is somehow badly written/compiled, resulting in too big a system dependency.
3. The validation restrictions are too strict and don't comply with the reality test.
As I'm neither a programmer nor a hardware builder, I can't say what the cause is, and I definitely won't blame anyone for this.
> The basis of this project is not like scanning something already done, as at SETI or Einstein (analysing prerecorded data); no, here the experiments are "generated" and the client calculates them.
It's a bunch of mathematical operations over a set of data, resulting in a result ;) The origin of the set of data is irrelevant, but if the operations arrive at noticeably different results on different CPU/OS combinations, something seems to be murky.
> Just look at SZTAKI, the times there are crazy too :)
Can't read Hungarian ;)
> So far my understanding, perhaps not correct.
> The message is that the runtime of a WU can differ within a small tolerance depending on the experiment, and it can also finish in a much shorter time.
> This will also have an influence on the "credit" calculation.
Runtime has nothing to do with this. Different runtimes result in different credits, that's fine, and it's the same in almost every other project (besides CPDN and Folding). But the same WU should of course run the same CPU cycles on different computers, and therefore get the same amount of credit. If it comes to a different number, the validator should decide who's wrong and who's right. And that's imho a question of all or nothing.
> It looks like you have spent too much time at SETI (*G*)...
> Enjoy the ride "as it is" and crunch whatever you can.
I've been with SETI, CPDN, Predictor, Einstein, Lattice and Folding so far, and I enjoy my ride here. But I'm still waiting for a sufficient answer to the why.
> In the long run, only power helps..
Naturally! Greetings from Sänger
Joined: 2 Sep 04 Posts: 378 Credit: 10,765 RAC: 0
Maybe Hefto99 could check his PC. Here's a floating point test program: http://www.jhauser.us/arithmetic/TestFloat.html You can run testfloat from a command prompt:
G:\testfloat>testfloat -all -errorstop
I'm sure the overclockers on this site could recommend a few motherboard temperature testing programs. I'm not the LHC Alex. Just a number cruncher like everyone else here.
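For a quick, rough check before reaching for a dedicated tool, a sketch like the following can also help: it reruns one long chain of floating point operations and flags any run that fails to reproduce itself bit-for-bit. The expression and iteration counts are arbitrary choices for illustration; this is not part of testfloat and is no substitute for it or for Prime95.

    # Rough reproducibility check for flaky (e.g. overclocked) FP hardware.
    import math

    def stress_value(n=100_000):
        """A long chain of FP operations whose result must be identical on every run."""
        x = 1.0
        for i in range(1, n):
            x = math.sqrt(x * i) / i + 1.0
        return x

    reference = stress_value()
    for run in range(20):
        if stress_value() != reference:
            print(f"run {run}: result differs from the reference -- suspect unstable hardware")
            break
    else:
        print("all runs reproduced the reference value")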
Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0
Interesting test ... I need to add it to the testing section ... thanks ... |
Joined: 23 Oct 04 Posts: 8 Credit: 653,447 RAC: 873
Hi all, thanks a lot for your responses. My problems were caused by a strange combination of OC + BOINC (LHC) + eMule. I overclocked my machine a little bit, then ran Prime95, and it worked without problems. Then I produced valid results in BOINC projects (including LHC) for several weeks with this configuration ... until I started to run eMule. Since that time, I started to produce invalid results in LHC. Normally I would not notice that, because I was awarded credits for such results. I just started wondering why I was getting only half of the claimed credits for so many WUs. Then I found that the others got much more. E@H was working well all the time (probably not as sensitive as SixTrack). So, therefore, I think that I should not get credits for an invalid result. I have just 1 host on LHC, so it is easy to monitor the results and the behaviour of the system. But if somebody has many hosts and one of them starts to behave in a strange way, it is not easy to find out as long as credits are granted. 0 credit for a WU would indicate that there is a problem with this host... I ran the overclocked system for 2 days with only BOINC, and my results are OK again. So it is interesting how other software may influence BOINC results... Anyway, I'm back on the original frequency, just to ensure that I will produce valid results all the time, regardless of what else is running on my system...
Joined: 17 Sep 04 Posts: 54 Credit: 2,202,983 RAC: 141
> So it is interesting how other software may influence BOINC results...
> Anyway, I'm back on the original frequency, just to ensure that I will produce valid results all the time, regardless of what else is running on my system...
Gave ya a + rating for this comment. Because it is hard to believe. Hard to admit. But it is true. In the end, overclocking is not worth the possible harm to data integrity.
Joined: 2 Sep 04 Posts: 378 Credit: 10,765 RAC: 0
> Interesting test ... I need to add it to the testing section ... thanks ...
That test exercises divides, adds, multiplies, and square roots at various floating point precisions. Not sure if it is useful for the trig functions... (I'm not an expert on what floating point work is done by SixTrack, but I think Chrulle or Markku mentioned it was a tan function that was the previous culprit.) There's even an IEEE 754 wiki article (IEEE 754 is the standard for floating point math): http://en.wikipedia.org/wiki/IEEE_floating-point_standard I'm not the LHC Alex. Just a number cruncher like everyone else here.
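Whether or not tan was the actual culprit, a short sketch shows why a trig function is a plausible one: near an asymptote, tan() turns a one-ulp difference in its argument (the kind of difference two correctly working FPUs can legitimately produce over a long calculation) into a result that differs well before the last digit. The numbers below are only an illustration, not anything taken from SixTrack.

    # Illustration only, not SixTrack code: sensitivity of tan() near its pole.
    import math

    x = math.pi / 2 - 1e-12          # an argument very close to the pole of tan()
    x_nudged = math.nextafter(x, 2)  # the same value shifted by a single ulp (Python 3.9+)

    a = math.tan(x)
    b = math.tan(x_nudged)
    print(a, b)
    # The relative difference is on the order of 1e-4 here, not the ~1e-16 a
    # one-ulp input change would normally cause, so a validator comparing such
    # values can easily see two "correct" machines disagree.
    print("relative difference:", abs(a - b) / abs(a))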
Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0
Thanks for the extra link ... I did not do a wiki search :( Anyway, it is enough that I added it ... better late than never ...