Message boards :
Number crunching :
Too low credits granted in LHC
Author | Message |
---|---|
Send message Joined: 4 Jul 07 Posts: 17 Credit: 35,310 RAC: 0 |
I think LHC is granting too little credit to its participants. For example:

LHC:

Task ID | Computer | Sent | Reported | Status | Run time (sec) | CPU time (sec) | Claimed credit | Granted credit | Application
---|---|---|---|---|---|---|---|---|---
304660 | 9931946 | 22 Sep 2011 4:09:15 UTC | 22 Sep 2011 16:34:44 UTC | Completed and validated | 17,295.55 | 15,779.43 | 88.70 | 88.08 | SixTrack v530.08
304661 | 9929870 | 22 Sep 2011 4:08:11 UTC | 22 Sep 2011 11:11:10 UTC | Completed and validated | 17,255.95 | 16,319.18 | 87.46 | 88.08 | SixTrack v530.08

That is about 0.00558 credit per second.

PrimeGrid:

Task ID | Computer | Sent | Reported | Status | Run time (sec) | CPU time (sec) | Granted credit | Application
---|---|---|---|---|---|---|---|---
302224595 | 206320 | 22 Sep 2011 13:20:05 UTC | 22 Sep 2011 20:31:31 UTC | Completed and validated | 1,083.38 | 1,078.15 | 19.68 | PPS (LLR) v6.09
302227986 | 175914 | 22 Sep 2011 13:28:34 UTC | 22 Sep 2011 14:47:53 UTC | Completed and validated | 1,488.86 | 1,218.30 | 19.68 | PPS (LLR) v6.09

That is about 0.01615 credit per second – roughly three times more than at LHC (on the same PC).

I know LHC is a very important project for science and I do not participate for the credits, but I think LHC should reward its participants with higher credit, as other projects do. Of course this is not a high priority for the admins, but they should look at it in the near future... Nawiedzony |
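The ratio the post describes can be checked directly from the quoted figures (granted credit divided by CPU time); a quick sketch using only the numbers above:

```python
# Credit per CPU-second, using the task figures quoted in the post above.
lhc_rate = 88.08 / 15779.43   # SixTrack: granted credit / CPU time (sec)
pg_rate = 19.68 / 1218.30     # PrimeGrid PPS (LLR)

print(round(lhc_rate, 5))            # 0.00558
print(round(pg_rate, 5))             # 0.01615
print(round(pg_rate / lhc_rate, 1))  # 2.9, i.e. roughly 3x
```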
Send message Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0 |
LHC is awarding proper credit. Primegrid is awarding too much credit. Primegrid should change, not LHC. |
Send message Joined: 4 Jul 07 Posts: 17 Credit: 35,310 RAC: 0 |
LHC is awarding proper credit. Primegrid is awarding too much credit. Primegrid should change, not LHC. Then compare with other projects: you will most often find a credit-per-second value 1.5-2.3 times higher than SixTrack's. |
Send message Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0 |
Those projects award too much credit too and they should decrease their credit awards. Sixtrack gives the proper amount. |
Send message Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0 |
It is kind of important. Who regulates what a second of compute should be worth? I have changed the assignment algorithm to take the average of all the valid contributions. For the rest, the default BOINC assignments are kept. The actual valuation could also depend on the BOINC version. I don't mind multiplying the calculated figure by a factor, but if the admins on other projects do the same we will end up in an escalating situation which makes no sense. Proper advice is needed. skype id: igor-zacharov |
Send message Joined: 3 Dec 05 Posts: 11 Credit: 992,093 RAC: 0 |
After reading this thread, I agree that the amount of credit LHC awards for work appears to be lower than at other projects. I do not crunch for credit... I crunch because I have an interest in what the project is doing. Getting credit for donating my resources to a project is nice, and getting more credit is nicer, but the amount of credit granted is NOT a consideration for me.

I do see something that I think may have a bearing on the credit issue. If I understand correctly, the credit system at LHC appears to be based on computing time. My personal feeling is that it might be better to award credit based on work performed. I make this suggestion based on what I have seen across the different computers I have crunching for various projects, and the credit granted for WUs of approximately the same size. Let me see if I can explain what I'm thinking without creating too much confusion. :-)

Computer 'A' performs 1000 operations per second. Computer 'B' performs 10 operations per second. A WU arrives requiring 5000 operations to complete, and the project awards 1 credit for every 10 seconds of work. If both computers get the same WU, then when that WU is completed computer A will get 0.5 credits and computer B will get 50 credits. If the project instead granted 1 credit for every 1000 operations performed, both computers A and B would get 5 credits. Did I get that right without embarrassing myself???

Now, I realize that things aren't quite that simple, and there are factors and circumstances that I haven't addressed in my example. But it does point out what I feel is a weakness in a time-based credit structure. |
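The arithmetic in that two-computer example can be verified with a short sketch (the speeds and award rates are the post's hypothetical numbers, not any project's real ones):

```python
def time_based_credit(ops_per_sec, wu_ops, credit_per_sec=1 / 10):
    # 1 credit for every 10 seconds of work, regardless of host speed.
    return (wu_ops / ops_per_sec) * credit_per_sec

def work_based_credit(wu_ops, credit_per_op=1 / 1000):
    # 1 credit for every 1000 operations performed, regardless of time taken.
    return wu_ops * credit_per_op

# Computer A: 1000 ops/s, computer B: 10 ops/s, WU: 5000 operations
print(time_based_credit(1000, 5000))  # 0.5  credits for the fast machine
print(time_based_credit(10, 5000))    # 50.0 credits for the slow machine
print(work_based_credit(5000))        # 5.0  credits for either machine
```

Under the time-based scheme the slow machine earns 100x more for identical work; the work-based scheme pays both the same.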
Send message Joined: 17 Feb 07 Posts: 86 Credit: 968,855 RAC: 0 |
Credit based on work performed, as J Henry Rowehl suggests, is indeed better. In my opinion the credit should depend on the performance of the PC – that is, on the time it needs to complete a WU. When an old Pentium 3, obtained free of charge from a company, runs a WU in 23 hours, and a new 6-core Xeon does the same type of WU in 12 minutes, the latter should get much higher credit. The same goes for GPU crunching: to get results within minutes you need graphics cards costing several hundred dollars, a big power supply and special motherboards, so such a PC costs a few thousand dollars. Only for crunching? Well, then it is fair that it gets a lot more credit. That is also fair because it helps science by returning a lot of calculations quickly, so the science teams can use them for further processing. That is the way I see it, and although I have millions of credits across all BOINC projects, I don't really care about them. If I like the science, the use of it, and the team around it, then I crunch for it. Greetings from, TJ |
Send message Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0 |
It is kind of important. Who regulates what a second of compute should be worth? If the default BOINC assignments are kept then you are using CreditNew, the new credit system Dave Anderson has implemented. Most recently started BOINC projects are using CreditNew and others will switch to it in the future. Some projects are offering ridiculously high credits to attract lots of crunchers, and that is not acceptable. You can multiply the credits by some factor if you want, but then you would just continue the practice of using credits to attract crunchers. The days of whoring for credits are nearly over. Dave will be twisting arms, gently, to convince rogue projects to use CreditNew. If they don't comply willingly there are measures he can take to force projects to use it. We are CreditNew. You will be assimilated. Resistance is futile. |
Send message Joined: 24 Jul 05 Posts: 1 Credit: 973,410 RAC: 0 |
It is kind of important. Who regulates what a second of compute should be worth? You could check with the admins at Climate Prediction on how they adjust their credits. A year or so ago they retroactively adjusted everyone's credit upward by 10% after realizing they had been crediting too low, and instituted a new, higher credit factor. And CPDN is one of the most conservative credit granters. |
Send message Joined: 17 Sep 04 Posts: 40 Credit: 293,269 RAC: 0 |
If I may comment. Way back when BOINC started, I argued against the implementation of the credit-awarding scheme – basically because it is ridiculous to compare across projects, and it would lead to considerable animosity. I should have said extreme animosity. I thought that if anything was to be counted, it should be work units correctly completed, with no cross-project comparisons. If, as seems to be happening, all projects are forced to be equal, it will only drive more people away. As it is, only a few more than 200,000 people let their computers run for science, out of more than 2,000,000 who have experimented with it. I think the argument, now, should be made for the good of the planet. Award more credits to computers that complete a work unit quickly. Set a median time and award fewer credits to computers that take longer than the median. This would encourage users to upgrade to newer, more energy-efficient models, thereby aiding the planet by reducing the rate of global warming. (grinning) -ChinookFöhn |
Send message Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0 |
Igor wrote: It is kind of important. Who regulates what a second of compute should be worth? My question is: are you using CreditNew, or some function that was imported from the old site? |
Send message Joined: 17 Sep 10 Posts: 1 Credit: 141,559 RAC: 0 |
In addition to worthless credits, LHC could offer something else as a reward. I'd love to have some graphics of what happens inside the LHC as wallpaper for my desktop. These should be available only to LHC crunchers, for the exclusivity. The more units crunched, the more wallpapers available for download. |
Send message Joined: 22 Oct 04 Posts: 1 Credit: 325,683 RAC: 0 |
Rather than credits or wall papers etc. how about just using some of the LHC faster than light neutrinos to just send me the winning lotto numbers for the next draw? |
Send message Joined: 3 Dec 05 Posts: 11 Credit: 992,093 RAC: 0 |
Wow, I didn't realize that I'd be opening up such a can of worms with the credit issue! Anyway, I thought I'd give a little more of an idea of what I think might be a good method for figuring out the credit to grant for crunching. I know the subject of credit comes up a lot on many projects, and the question becomes how to grant credit fairly for work done.

My belief is that the project administrators should be able to determine how much they want to 'pay' for a volunteer's computer resources. Yes, there should be some type of limit set, but the actual credit granted should be based on the value of the returned result as it applies to the science being done.

I read the article on 'CreditNew', and to be perfectly honest, about 90 percent of it shot right past me at about warp 7. I saw the parts about computer speed, processing time, etc. But I have to ask: why is the CPU benchmark given in flops? On the server status pages, why is the available computing power given in Gflops and Tflops? When a project page shows the 'user of the day', why does it say 'Person X is contributing Y Gflops'? And before I go too much further – yes, I realize that 'flops' is floating-point operations per second, a second being a measure of time.

So... let me ramble on with my thoughts. The project administrators decide that the science returned in a given type of work unit is worth 5 points to them. The work unit is sent out, and when it is returned with a successful result, 5 points are granted. In the event that a work unit can be returned with several valid end points – such as the LHC work units that can complete successfully even if the particle crashes into a wall – the granted credit can be scaled by the percentage of the work unit that was processed. In other words, if 20 percent of the WU was processed, 1 point is granted; 50 percent gets 2.5 points; and so on. 
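The sliding scale described above (a fixed per-WU award, scaled by how much of the WU was actually processed) is trivial to state in code; the 5-point figure is the post's hypothetical, not a real project setting:

```python
def granted_credit(full_award, fraction_processed):
    # Scale the fixed per-WU award by the fraction of the WU completed,
    # e.g. for tasks that can end early when the particle hits a wall.
    return full_award * fraction_processed

print(granted_credit(5, 0.20))  # 1.0
print(granted_credit(5, 0.50))  # 2.5
```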
I am going to use my previous example, but modify it so that computer A does 10 GFLOPS, computer B does 1 GFLOPS, and add computer C, running on a GPU capable of 150 GFLOPS. One of my LHC work units shows an estimated task size of 30,000 GFLOPs (billions of floating-point operations). For this WU, computer A should take 3,000 seconds (50 minutes), computer B should take 30,000 seconds (about 8.3 hours), and computer C should take 200 seconds (about 3.3 minutes). When each computer completes those 30,000 GFLOPs of work, it is 'paid' 5 points, regardless of how long it took. Simple enough for me.

Now, what about those who think the faster computer should get more points for completing the work faster? Or those who think the slower computer should get more points because it spent more time crunching? I'd be willing to bet that if all the BOINC projects stopped granting points altogether, we would find out relatively quickly who was crunching for the warm fuzzy feeling of contributing something worthwhile to science, and who was crunching solely for the points.

Another example: I have several desks in an office that need to be moved. Each desk weighs 100 pounds and needs to be moved 300 feet – I need 30,000 foot-pounds of work done per desk. And I'm going to pay 5 dollars to move each desk. Amazingly enough, that's the same number as the estimated GFLOPs for an LHC WU! And the same number of points I used in my example! I'm goooood! :-) Three people agree to do the work and head over to the office to start moving desks. Now for the fun part... Person A came back in less than an hour, got paid, and was happy. Person B came back 8 and a half hours later and wanted more money because he spent more time moving the desk. Person C came back in less than 5 minutes and wanted more money because he moved the desk so quickly. Does this sound eerily familiar?

The people running the faster computers DO get more points. 
Using the three computers from my example above: computer A, at 10 GFLOPS, can crunch 28.8 WUs per day, for 144 points; computer B, at 1 GFLOPS, can crunch 2.88 WUs per day, for 14.4 points; and computer C, at 150 GFLOPS, can crunch 432 WUs per day, for 2,160 points. |
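Those daily-throughput figures follow from the same assumptions (30,000 GFLOPs per task, 5 points per WU; all numbers are the post's hypotheticals):

```python
SECONDS_PER_DAY = 86_400
TASK_SIZE_GFLOP = 30_000   # estimated task size from the post
POINTS_PER_WU = 5          # the post's hypothetical per-WU award

def points_per_day(host_gflops):
    # WUs completed per day times the fixed award per WU.
    seconds_per_wu = TASK_SIZE_GFLOP / host_gflops
    wus_per_day = SECONDS_PER_DAY / seconds_per_wu
    return wus_per_day * POINTS_PER_WU

for gflops in (10, 1, 150):
    print(gflops, round(points_per_day(gflops), 1))  # 144.0, 14.4, 2160.0
```

So even with a fixed award per WU, daily points scale linearly with host speed, which is the point the post makes.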
Send message Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0 |
Wow, I didn't realize that I'd be opening up such a can of worms with the credit issue! What makes you think you opened the can? It's been open for a long time. Anyway, I thought I'd give a little more of an idea as to what I think might be a good method for figuring out the credit to grant for crunching. Well, I read to the end of your laborious missive, and you gave some goals and guidelines, none of them new, but never came close to outlining a method for achieving those goals. There are tons of ideas and precious little code that works. Code talks, ideas walk. My belief is that the project administrators should be able to determine how much they want to 'pay' for a volunteer's computer resources. This is typical American-style laissez-faire capitalist philosophy. The only thing is that here in the BOINC world admins pay absolutely nothing for the credits they dole out, so laissez-faire capitalist economic dogma doesn't apply. Yes, there should be some type of limit set Now you contradict what you just said about it being the admins' decision. but the actual credit grant should be based on the value of the returned result as it applies to the science being done. Show me 1000 different admins and I'll show you 1000 different evaluations of what a returned result is worth and 1000 evaluations of how that applies to the science being done. You're talking about things that nobody can put numbers on, things that can't be quantified. Computers don't compute pie in the sky and pipe dreams. They compute numbers. I read the article on 'creditnew', and to be perfectly honest, about 90 percent of it shot right past me at about warp 7. I saw the parts about computer speed, processing time, etc. But I have to ask, why is the CPU benchmark given in flops? On the server status pages, why is the available computing power given in Gflops and Tflops? When a project page shows the 'user of the day', why does it say 'Person X is contributing Y Gflops'? 
And before I go too much further, yes, I realize that 'flops' is floating point operations per second, second being a measure of time. Well, if you had any understanding of how computers work you wouldn't have to ask why benchmarks are given in flops. Nobody's going to give you a meaningful tutorial on that here. Read. Find out about machine cycles, instructions and operations. So... let me ramble on with my thoughts. You're talking about a deterministic algorithm – one whose progress can be determined before running, or counted while running. That's not always possible. There is no way to determine whether 20 percent was processed, 50 percent, or whatever. And who can say whether a task that runs to full completion is worth 5 points? Where did you get that number? Why not 10 points, or 100 points, or just 1 point? How does one determine the value of a task? I am going to use my previous example, but modify it so that computer A does 10 Gflops, computer B does 1 Gflops, and add a computer running on a GPU that is capable of 150 Gflops. One of my LHC work units shows an estimated task size of 30,000 Gflops. For this WU, Computer A should take 3,000 seconds (50 minutes) to run the job, Computer B should take 30,000 seconds (about 8.3 hours), and computer C should take 200 seconds (about 3.3 minutes). When each computer completes 30,000 Gflops of work, that computer is 'paid' 5 points, regardless of how long it takes. Simple enough for me. All that amounts to is equal pay for equal work, a principle long established and accepted by most. Your next example is just a laborious additional illustration of the same principle. The argument is about how to determine the point value of the task. The problem with letting project admins determine how much they pay for their tasks is that certain greedy, low-life admins think their research is more important than everybody else's. They pay huge amounts of credit so they can attract more crunchers. 
And their strategy works. Well, that is NOT the vision Dave Anderson had when he created BOINC, and he isn't going to tolerate what's been going on at those rogue projects much longer. They're either going to use CreditNew or they're out of the game. Who's going to determine what a task is worth? Dave Anderson is going to do that, and you're all going to like it or get out of Dodge. The days of whoring for credits are nearly over. THAT is what is in the can of worms you think you opened. We are CreditNew. You will be assimilated. Resistance is futile. |
Send message Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0 |
We are on BOINC version 6.11.0, with the validator for SixTrack rewritten from the sample_bitwise_validator. The credit is calculated by returning stddev_credit from compute_granted_credit in the validator. I believe we don't calculate any credit inside SixTrack itself (I will verify with Eric McIntosh). Therefore, the average is based on the credit claimed by the client software. skype id: igor-zacharov |
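For readers unfamiliar with stddev_credit: the idea, roughly, is to average the credit claimed by the clients for a workunit's valid results, discarding claims that are statistical outliers. A rough Python sketch of that idea (the real implementation is the C++ function in the BOINC validator and differs in detail; the function name and the 2-sigma cutoff here are illustrative):

```python
from statistics import mean, pstdev

def stddev_style_credit(claims, max_sigmas=2.0):
    # Average the claimed credits of the valid results, discarding any
    # claim more than max_sigmas standard deviations from the mean.
    if not claims:
        return 0.0
    if len(claims) == 1:
        return claims[0]
    m, s = mean(claims), pstdev(claims)
    kept = [c for c in claims if s == 0 or abs(c - m) <= max_sigmas * s]
    return mean(kept) if kept else m

# The two SixTrack claims from the first post (88.70 and 87.46) average
# to the 88.08 actually granted for both tasks in that table:
print(round(stddev_style_credit([88.70, 87.46]), 2))  # 88.08
```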
Send message Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0 |
we are on boinc version 6.11.0 with validator for sixtrack rewritten from I've been trying to read the documentation and understand this myself. I could not find this exactly, but to me, if you used the latest version and the sample supplied with it, that is good enough.

Here are some points from the documentation. I can't find "CreditNew" exactly as other people are referring to it, but there is a section called "The third New credit system", which I assume is it. From the documentation:

•BOINC estimates the peak FLOPS of each processor. For CPUs, this is the Whetstone benchmark score.
•Application performance depends on other factors as well, such as the speed of the host's memory system. So a given job might take the same amount of CPU time on 1 GFLOPS and 10 GFLOPS hosts.
•The efficiency of an application running on a given host is the ratio of actual FLOPS to peak FLOPS. However, application efficiency is typically lower, around 50% for CPUs.

To me the lines above say that if credit is low, the application is inefficient.

Goals of the new (third) credit system:
•Completely automated – projects don't have to change code, settings, etc.
•Device neutrality
•Limited project neutrality: different projects should grant about the same amount of credit per host-hour, averaged over hosts.
[I snipped out the GPU stuff]

Point 3 is only good if other projects are using this new method too, so users should encourage projects to stop whatever process they use and adopt the same one (i.e. CreditNew); that is the only way credit will be on a level playing field. It would also end these discussions, since it leaves the credit granted to one common formula and takes it out of the hands of the project (as "project neutrality" says), i.e. projects would stop granting whatever unrelated number they want. |
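As a back-of-the-envelope illustration of the documentation's points: if credit is pegged to actual work done (peak FLOPS times efficiency times elapsed time) at the classic cobblestone rate, where a 1 GFLOPS host working a full day earns 200 credits, then low application efficiency directly means low credit. A sketch under those assumptions (the 50% CPU efficiency is the documentation's typical value; the host speed and run time are hypothetical):

```python
CREDITS_PER_GFLOP = 200 / 86_400   # cobblestone rate: 1 GFLOPS-day = 200 credits

def work_based_credit(peak_gflops, efficiency, elapsed_sec):
    # Actual work done = peak speed * efficiency * elapsed time,
    # then converted to credits at the cobblestone rate.
    gflop_done = peak_gflops * efficiency * elapsed_sec
    return gflop_done * CREDITS_PER_GFLOP

# Same hypothetical host and elapsed time, different application efficiency:
print(round(work_based_credit(4.0, 0.50, 16_000), 1))  # typical CPU app
print(round(work_based_credit(4.0, 0.25, 16_000), 1))  # less efficient app
```

Halving the efficiency halves the credit for the same wall-clock time, which is consistent with the reading that low credit signals an inefficient application.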
Send message Joined: 3 Oct 06 Posts: 101 Credit: 8,994,586 RAC: 0 |
This is a nice discussion, but with too much text for me to translate – sorry if I am repeating something. :-) Just for fun: one computer equipped with 4 NVIDIA GeForce GTX 570 cards earned, in 5 months at PrimeGrid, approximately the same amount of credit as all hosts of all users at LHC@home since the project started. The BOINC combined statistics were always a formality, but today they are a joke, if not an absurdity. IMHO, one formula for all, with cheaters blocked, would be enough for correct and adequate statistics here. |
Send message Joined: 24 Nov 06 Posts: 76 Credit: 7,953,478 RAC: 0 |
Cross-project credit equality is impossible. Full stop. CreditNew does not solve it, and is pretty much a random number generator. CreditNew should be avoided at all costs, IMO. Dublin, California Team: SETI.USA |
Send message Joined: 28 Nov 05 Posts: 31 Credit: 115,957 RAC: 0 |
Cross-project credit equality is impossible. Full stop. +1 Don't know why people get so hung up over credit. Not much you can do with it. They don't accept it in payment at my local supermarket. Some people have more of an utterly worthless thing than others. Awesome. As for "fairness" maybe we should ban GPUs - They produce too much credit compared to CPUs. Just look at the top RAC projects - All GPU - Unfair! ;) This credit chestnut comes up time after time. Maybe credit should only be awarded when you have found a new megaprime, pulsar, cure for cancer etc ;) It's just a fun thing. I see no evidence of "credit wars" or "flocking". I suspect the most avid credit-cops believe in faeries and unicorns. Bless. Al. |
©2025 CERN