Message boards : Number crunching : Why is my granted credit just half that of others who did the same work?

Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12365 - Posted: 24 Jan 2006, 15:53:06 UTC - in response to Message 12341.  



Perhaps my claimed credit was on workunits with errors. I don't think so, but right now I'm running a memtest, because on 2nd Jan I added 1 GB of RAM, and that is probably the answer, given the high precision the SixTrack application needs.

As for the median, it's good and fair, and there are probably no penalties.

If you don't believe in the median and still believe in cheating, try OpenOffice Calc or M$ Excel: set up 4 columns and hundreds of rows (3 with random claimed credits, 1 with your own claimed credit), take the median of the 4, and you will be convinced that no appreciable cheating or abuse is possible.
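
Or, if a spreadsheet is too slow, here is a little Python sketch of the same experiment (the claim values are invented; only the median idea matters):

```python
import random
import statistics

random.seed(1)

# 1000 simulated workunits: 3 honest claims scattered around the "true"
# value of the work, plus 1 wildly inflated claim from a would-be cheater.
TRUE_VALUE = 30.0
CHEAT_CLAIM = 300.0  # a 10x inflated claim

granted = []
for _ in range(1000):
    claims = [random.gauss(TRUE_VALUE, 3.0) for _ in range(3)] + [CHEAT_CLAIM]
    granted.append(statistics.median(claims))

print(f"mean granted credit: {statistics.mean(granted):.1f}")
# Stays near 30: the median of the 4 claims ignores the single 300, so one
# cheater cannot appreciably move the granted credit.
```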

I very much like participating in such a big calculation as this, and the forum replies are plentiful.

Did you know that optimized clients can take a third of the time (on average) compared with the official x86 builds?

If all software (free or GNU software too) could be optimized, computers would run like light in a vacuum. Haha, lol.

Windows would be 3 times faster on the same processor: a 3.8 GHz Pentium behaving like 12 GHz with no overclocking. WOW.

And using hardware-specific calculations, XX times faster. When will there be a "science accelerator", like the ones for 3D graphics, sound, or MPEG encoding?
When will the physics fit in one chip?
ID: 12365
Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12367 - Posted: 24 Jan 2006, 16:41:18 UTC - in response to Message 12365.  

Already tested: no memory problems, and the overclock is just a little one, an AMD XP2500+ at 1833@2005 MHz, rock stable. So it is mysterious.
ID: 12367
Mr.Pernod
Joined: 16 Jul 05
Posts: 65
Credit: 369,728
RAC: 0
Message 12380 - Posted: 24 Jan 2006, 18:58:58 UTC - in response to Message 12367.  
Last modified: 24 Jan 2006, 19:01:34 UTC

Already tested: no memory problems, and the overclock is just a little one, an AMD XP2500+ at 1833@2005 MHz, rock stable. So it is mysterious.

You are still returning invalid results, so I suggest you drop it down to stock speed and see how that works out when you get some more work from LHC.
ID: 12380
Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12408 - Posted: 25 Jan 2006, 17:37:07 UTC - in response to Message 12380.  
Last modified: 25 Jan 2006, 17:51:13 UTC

This processor returned valid credit when running at 2333 MHz, and from September to January it has run at its current speed of 2000 MHz at 1.45 V (because it is a mobile Barton on an Abit NF7 v2.0).
I have 12 PCs running with no invalid workunits, some overclocked by about 10%.
My PC with a Sempron (Palermo core, 1600@1800 MHz) has double the performance of this Athlon at 2000 MHz, and I've seen other users' PCs (sorry, I have no link) in the same situation as mine, with half their credit. Invalid, but why? I don't know.
Computers with incredibly high results and valid status do exist, but in other projects, not in this one at the moment. So a penalty doesn't seem to be the reason; but if not that, what is it?

In SETI@home there's a Pentium 4 with 32 processors (32 is the maximum number of virtual processors on VMware... hmmm). Take a look at the SETI top hosts: there's an Athlon XP2200+ with thousands of credits granted a day but only 7.5 credits/h. Ooooh, yeah!

Or a Pentium 4 D 2.8 at 64 credits/h; it is double the speed of mine with an optimized client, 45 min/WU (1 h 3 min/WU for mine), but its benchmark tests show 10x the MIPS, the FLOPS and all the rest. Rocket Pentiums, and Linux clients with really good performance.
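
Just to show what those benchmarks mean for the claim, here is a sketch assuming the standard cobblestone formula (100 credits per day of work on a host benchmarking 1 GFLOPS and 1 GIPS); the host numbers are invented:

```python
def claimed_credit(cpu_seconds, fpops_per_sec, iops_per_sec):
    """Benchmark-based claim: 100 cobblestones per day of work on a host
    benchmarking 1 GFLOPS and 1 GIPS (assumed standard formula)."""
    avg_gops = (fpops_per_sec + iops_per_sec) / 2 / 1e9
    return 100.0 * (cpu_seconds / 86400.0) * avg_gops

# An honest host vs. one reporting 10x inflated benchmarks, same 63 min WU:
print(claimed_credit(3780, 2.0e9, 4.0e9))    # ~13 credits
print(claimed_credit(3780, 20.0e9, 40.0e9))  # ~131 credits for the same work
```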

It smells like cheating.

The very best of the very best is: "Authentic Unknown CPU Type". lol

At the moment, changing the BOINC client has had no effect on correct validation.
ID: 12408
Deamiter

Joined: 28 Sep 04
Posts: 4
Credit: 1,063,233
RAC: 0
Message 12409 - Posted: 25 Jan 2006, 17:59:48 UTC - in response to Message 12408.  

This processor returned valid credit when running at 2333 MHz, and from September to January it has run at its current speed of 2000 MHz at 1.45 V (because it is a mobile Barton on an Abit NF7 v2.0).
I have 12 PCs running with no invalid workunits, some overclocked by about 10%.
My PC with a Sempron (Palermo core, 1600@1800 MHz) has double the performance of this Athlon at 2000 MHz, and I've seen other users' PCs (sorry, I have no link) in the same situation as mine, with half their credit. Invalid, but why? I don't know.

So you're overclocking, and you're seeing errors, but you have no idea why?!?!?

Come on! There's no way you can claim, "I overclocked this other machine without errors, so I can do this one too!" If you're overclocking, you should know very well that each box reacts differently. Then you're claiming that since you got valid WUs at 2.3 GHz, it must be fine at 2 GHz. Isn't it JUST possible that you got lucky with no errors during some WUs? Haven't you ever seen one of your overclocked machines need to be clocked back to spec after a few months as the processor ages?

Your first reaction should ALWAYS be to stop overclocking. If you're still returning bad results, then it's something else.


In SETI@home there's a Pentium 4 with 32 processors (32 is the maximum number of virtual processors on VMware... hmmm). Take a look at the SETI top hosts: there's an Athlon XP2200+ with thousands of credits granted a day but only 7.5 credits/h. Ooooh, yeah!

You know you can merge hosts, right? They just merged a bunch of unique hosts to inflate their score -- no big mystery there.
*snip*
It smells like cheating.

So you're digging around in the stats pages looking for cheaters, and when you find a couple, your response is that YOU should get highly inflated scores too! I guess I don't see why we should feel particularly sympathetic that your attempts to artificially inflate your score are failing.
ID: 12409
Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12415 - Posted: 25 Jan 2006, 19:41:54 UTC - in response to Message 12409.  

I'm only comparing.
I'm not trying to inflate my credits; if there is to be a granted/claimed credit system, I just want clear information about it: what assumptions are used, what the precision criteria are, and the reasons for not going with optimized clients.

A distributed computing project must be reliable and trustworthy, and the more open source it is, the more powerful it becomes.

I'm checking whether it's secure and trustworthy, because I don't understand claiming credit while not being sure what granted credit means.

When I first posted this thread, people wondered if I was trying to cheat. Curiosity is the essence of science.

Have you tried running multiple instances of the BOINC client, slowing down calculations, or running in virtual PC emulators? And what about merging several computers into only one?

I tried, and I have my own conclusions, because none of my attempts succeeded. That's good news, because it indicates a good programming job.

And what about security? You run boincmgr.exe, which runs boinc.exe and the n applications of the n projects you have attached; but with the appropriate XML you can run any program. Look at how optimized clients are installed on SETI@home.

The Grid will be more revolutionary than the Internet, and projects like this one help make the Grid a reality. I'm interested in how things work.

Physics is what we can say about how things work, not why things work the way they do.

Pure curiosity.

Only with curiosity will you try unexplored paths, and you will learn a lot of things, even if others have perhaps discovered them already.
ID: 12415
Paul D. Buck
Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 12418 - Posted: 25 Jan 2006, 20:42:33 UTC

All of your questions are answered in the Wiki ...

I point you there because we have worked on the explanations for over two years, so they are pretty complete and as clear as we can make them. Search on "Credit", then follow the links to the other terms you don't know ...
ID: 12418
Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12422 - Posted: 25 Jan 2006, 23:10:19 UTC - in response to Message 12418.  

All of your questions are answered in the Wiki ...

I point you there because we have worked on the explanations for over two years, so they are pretty complete and as clear as we can make them. Search on "Credit", then follow the links to the other terms you don't know ...


Wiki Wiki. It's ok. Thanks for the link.
ID: 12422
Mr.Pernod
Joined: 16 Jul 05
Posts: 65
Credit: 369,728
RAC: 0
Message 12445 - Posted: 26 Jan 2006, 12:35:18 UTC

Sepionet,

people trying to "cheat" on RAC and people returning invalid work to a project are two totally different things.
The answer to your original question is not in cheating, but in returning invalid results.

So, let's take a look at invalid results.
Unfortunately LHC@Home is not showing the "stderr out" info for results, so we can't analyze that for errors.
But there is something else we can look at, namely result runtimes.
Your slightly overclocked computer is taking a lot less time to compute a result than a comparable computer.
For LHC short computing times usually mean the beam/particle trajectory is set on a crash-course.
If one computer puts the beam/particle on a crash-course, while the other computers calculating the same result don't, something in that one computer is causing the calculations to return a different answer than what is to be expected.
This difference can be caused by a different brand or model of processor, or by an error in (for example) the floating-point unit of a processor: a design error, a manufacturing error, or just plain overclocking.
Under normal usage of this "faulty" CPU those errors might never show up (think back to the early Pentiums), but when running applications that require high precision and accuracy in their calculations, they will, even if they "only" happen at the 13925th position behind the decimal point.

A number of your results have been crunched by very similar processors (same brand, same core), so one would expect the calculations to produce the same numbers.
Well, they don't. In fact, the numbers produced by your overclocked CPU put the particle on a course into the wall of the facility near Geneva, or on a course to Paris, Berlin, New York, or perhaps even the moon, instead of round and round in that ring buried in the Alps, where the rest of the computers are sending it.
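
To see why a difference that tiny matters over a long tracking run, here is a minimal sketch; the logistic map below is just a standard chaotic toy, not SixTrack's actual equations:

```python
def turn(x):
    """One iteration of the logistic map x -> 4x(1-x), a standard chaotic
    toy standing in for one 'turn' of particle tracking."""
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-15  # identical except for roughly the last stored digit
for n in range(1, 61):
    a, b = turn(a), turn(b)
    if n % 15 == 0:
        print(f"turn {n:2d}: separation = {abs(a - b):.3e}")
# The tiny difference roughly doubles every turn; within ~50 turns the two
# "particles" are on completely unrelated trajectories.
```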

Put that computer back on stock speeds as soon as possible, and keep a very close watch on the results you return.
ID: 12445
Nuadormrac

Joined: 26 Sep 05
Posts: 85
Credit: 421,130
RAC: 0
Message 12448 - Posted: 26 Jan 2006, 13:01:24 UTC - in response to Message 12232.  

First of all you are claiming a lot more than you should by using the optimized CC, and some people consider this cheating or borderline cheating...


Luckily this is coming to an end with the introduction of FLOPS counting. It is currently being tested at SETI Beta, to be introduced with SETI Enhanced and the upcoming Malaria Control. Bruce from E@H is looking into it too.

Michael


Meh, hopefully it balances it all out. With an optimized SETI client one can well see the same WU completing in less time (due to the science-app optimizations), and in that case an optimized CC might make sense. However, on other projects it can inflate one's claimed credit. So the question, if a person using an optimized SETI app also runs other projects (other than Rosetta and CPDN, which don't use a quorum), is what they should do to keep the credits fair across all projects, optimized science app or none at all (where one isn't available).

With FLOPS counting, the people with an optimized SETI app claim credit for the actual work they do, and no optimized benchmarks spill over to affect other projects. Sounds good, if it manages to sort all of that out...
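
A sketch of why FLOPS counting should fix it, assuming the same cobblestone rate of 100 credits per GFLOPS-day (the operation count is invented):

```python
CREDIT_PER_FLOP = 100.0 / (86400.0 * 1e9)  # assumed: 100 credits per GFLOPS-day

def flops_claim(fpops_counted):
    """FLOPS-counting claim: depends only on the floating-point operations
    the science app actually performed, not on runtime or benchmarks."""
    return fpops_counted * CREDIT_PER_FLOP

# The same WU costs the same operations on every host, so a stock app on a
# slow host and an optimized app on a fast host claim identically:
WU_FPOPS = 2.5e13  # invented operation count for one workunit
print(flops_claim(WU_FPOPS))  # ~28.9 credits, whatever the host
```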
ID: 12448
Nuadormrac

Joined: 26 Sep 05
Posts: 85
Credit: 421,130
RAC: 0
Message 12455 - Posted: 26 Jan 2006, 13:43:57 UTC - in response to Message 12339.  
Last modified: 26 Jan 2006, 13:55:21 UTC

Sepionet,

There are two possible problems going on, and though you might not like the answer, you're best advised to check them both out:

- Either you're overclocking that machine beyond its capability to produce good results (i.e. a hardware problem), and you really should try easing off on that clock.

Whatever you got on different hardware is irrelevant. Hell, when I swapped out my RAM (you did mention a memory upgrade), I had to re-test things, as my old OC value (and I have overclocked my A64 here) would no longer play nicely with the new RAM. It isn't just the CPU one has to test, but the RAM also. Anyhow, two CPUs will not overclock to the same degree simply because they have the same name on the box. What often happens (OK, Intel used to do this back in the day; I can't say whether they still do) is that the manufacturer ends up with too many of a higher-clocked part, say Pentium 133s. Intel didn't want to flood the market with them, as that would tend to lower prices, so to keep the supply from becoming too abundant they re-marked some as, say, Pentium 100s and sold them as such. It was really a Pentium 133, re-binned because yields were too high and they didn't want to make less money on the higher-clocked unit...

The other matter is that there should at least be some timing margin in a given piece of hardware; if there isn't, it won't necessarily fare well. The reason is that a given clock pulse isn't guaranteed to always time the same. Let's say you're running a 200 MHz FSB. The crystal that generates that 200 MHz will not always generate exactly that: a margin of error in the timing means that every once in a while the clock might be only 199 MHz, and every once in a while 201 MHz. If you have no margin, the moments it generates 201 MHz (not because of overclocking, but because the timing crystal isn't a "perfect" clock generator) could result in a crash. I have an A64 now, and due to CnQ (Cool'n'Quiet throttles the CPU clock back when the CPU isn't under load), the monitoring software that comes with the mobo no longer shows a fixed clock; it varies with changes in the actual clock.

In my case CnQ is disabled (hey, I don't want the clock lowered while I'm trying to run a WU, bleh), but the monitoring still runs. The clock is usually consistent, about where it should be, but every once in a while the CPU clock shows as a few MHz slower than the usual clock (a number of MHz equal to the multiplier itself), or the same number of MHz faster. This is normal, given what I just described and the physics of the thing itself.

- The other possibility is that your optimized client isn't returning the same result that people are getting with the standard client (i.e. a software error). If this is the case, you really should either go back to the default or find another optimized science app. Optimizing to make better use of the CPU is fine, as long as the results returned are the same. If it gets better crunch times by leaving some of the science out, or by shortcutting its way through the WU (as opposed to an instruction-set optimization, which does produce comparable results), then it is not OK, and of no use to the project.

The sooner you get about testing this, the better off you'll be. Arguing about why you should get more credit will not result in the project managers giving you more credit for INVALID results, which have no scientific value to them whatsoever. It's time to get down to resolving the issue that underlies your computer returning bad results...

Now, how much margin you have depends on the specific unit, and you CANNOT generalize from one to another. The only thing the manufacturer guarantees to work is the default clock or lower. Anything above that is the luck of the draw.

An optimized science application is a version of the science program compiled with a specific processor in mind. This makes the software run as fast as possible. The primary effect is to complete work faster. As a consequence, the credit claim will be lower (shorter time). Some hold that this "cheats" them of credit.


A bigger issue, IMO, is what it does to others' claimed credit. If one claims less but gets more work done, then given the quorums one should already be seeing more credit (assuming not everyone is using an optimized client): they'll simply tend to see more credit granted than they claimed, in many cases.

But what happens when two people use an optimized science app? The median value is then that of an optimized client, and the third person, who ran the default science app, has their own credit undercut by the two who ran the optimized one, as the sketch below shows. It's more as a concession to that possible third person (who really did spend more time crunching the WU, and not because of a slower machine) that, at least in my mind, it might be fair to use an optimized CC, to try to get the benchmark to balance out. True, they could have downloaded an optimized science app too, but should they get less credit simply because they ran the default? Hmm...
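
To make that concrete (invented claim values):

```python
import statistics

# Quorum of 3 for one WU: two hosts ran optimized science apps and claim
# low; the third ran the stock app and claims honestly.
claims = [12.0, 13.0, 30.0]
print(statistics.median(claims))  # 13.0 -- the stock cruncher is undercut
```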

So they want to run an optimized BOINC client with the optimized science applications to bring everything back into "balance". The problem is that there is an implicit assumption that the optimization of the BOINC client is proportional to the speed-up of the science applications. That may or may not be true. Also, different science applications run with better or worse efficiency on specific CPUs, so an optimization that "balances" SETI@Home may unbalance Einstein@Home.


Yeah, this can be a problem, of course, because many of us run more than one project. What might even things up for the others crunching the same WU as oneself could have the opposite effect on another project where one doesn't have an optimized science app. Good point, too, that the level of optimization might not be equal in two pieces of software...

The good news is that the newer method of counting FLOPS seems to be more stable, and perhaps we can put this nightmare behind us ... though I have a feeling the cat is out of the bag, and the argument will still be that the modifiers know more about what the claims and grants of credit should be than the projects do.


One can hope. It certainly sounds good in theory. And if it can actually balance things out where an optimized science app is used, without sending other projects askew (as the current situation can), all the better...
ID: 12455
Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12563 - Posted: 29 Jan 2006, 11:56:04 UTC - in response to Message 12455.  

It looks like a real forum. Wow.

I have detached from L@H, but I have an idea that the FSB could be the problem, because I'm using 166 MHz (the typical Barton speed) while the mobile Barton's is 133 MHz. I would try testing precision by running Matlab calculations, or any precision test if I can find one, on three different PCs, to see whether the one that fails does so at FSB@166 MHz but not at stock FSB. That could be the source: the clock generator adding some kind of imperceptible latency.

I have seen this effect of the FSB when running memtest on the same hardware at different speeds; probably the problem is not in the RAM but in the value returned at the end of a long calculation iteration.

I will write a script doing millions of +1/-1 additions onto the preceding result, in double precision, single precision, float and integer; the total has to be 0 (it won't be, but I'll discover by what ratio it is off).
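
Something like this, as a first sketch of what I mean (in Python; the iteration count and step size are arbitrary):

```python
def drift_test(n=10_000_000, step=0.1):
    """Add `step` n times, then subtract it n times. The exact answer is 0;
    the residual measures the accumulated floating-point rounding error.
    (With integer steps the result is exactly 0, as it should be.)"""
    total = 0.0
    for _ in range(n):
        total += step
    for _ in range(n):
        total -= step
    return total

print(repr(drift_test()))
# Rounding error is deterministic: a healthy machine prints the identical
# residual on every run. If repeated runs -- or the two FSB settings --
# give different answers, the hardware itself is miscalculating.
```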

If anyone has an AMD mobile XP2500+ (Barton, 1.45 V) running on an Abit NF7 v2.0, could you please say whether you have any invalid-result problems?
ID: 12563
Sepionet

Joined: 3 Sep 05
Posts: 27
Credit: 35,521
RAC: 0
Message 12611 - Posted: 1 Feb 2006, 22:26:12 UTC - in response to Message 12563.  

The problem was the FSB: 15x133 MHz is precision-stable, instead of 12x166, which was merely "rock stable".
ID: 12611
Lee Carre
Joined: 13 Jul 05
Posts: 23
Credit: 22,567
RAC: 0
Message 12661 - Posted: 7 Feb 2006, 1:41:50 UTC - in response to Message 12455.  

The good news is that the newer method of counting FLOPS seems to be more stable, and perhaps we can put this nightmare behind us ... though I have a feeling the cat is out of the bag, and the argument will still be that the modifiers know more about what the claims and grants of credit should be than the projects do.


One can hope. It certainly sounds good in theory. And if it can actually balance things out where an optimized science app is used, without sending other projects askew (as the current situation can), all the better...

Pappa is doing a great job of testing this out at SETI Beta; it does all the things it's meant to do,

and an optimized FLOPS app claims the same as a standard FLOPS app for the same amount of work,

so everything works just as it should :)
ID: 12661