Message boards : Number crunching : Credits
Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0
Following the analysis of the credits given for the long jobs of the last two weeks, we have decided to award additional credit based on our internal accounting system. Several problems were exposed thanks to your help, most of them due to the choice of crediting based on real time. This had several consequences:

1) The real-time credit was implemented with a cut at 10 hours, so long-running jobs did not get the credit they were due.
2) As already discussed in the forum, old BOINC clients do not report real time. Therefore, if an old client delivered the canonical result, all results for that workunit got zero credit.
3) Clever people run the Anonymous Platform with an artificially high performance value assigned. A slow platform then gets disproportionately more credit, because credit is calculated as time * platform_performance.

We have analyzed what the credits would be under our internal accounting system, based on the values SixTrack reports. For each host, if our accounting would give more credit, we built an update table, which was applied to the database; if it would give less credit, we did not touch the assigned values. For most people this adds a few thousand points; for some, a few tens of thousands.

Please look at your credits if you care, and if you find problems, discrepancies, or have comments, write to this thread or to my private inbox. I will be looking at the system again on Wednesday (22nd of August).

Going forward, we are running with the default credit system. It seems to handle the Anonymous Platform correctly and assigns credit values similar to the internal SixTrack accounting. Please report any thoughts or observations.

Thank you for your support and patience in this matter.

Igor.
skype id: igor-zacharov
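A minimal sketch of the runtime-based rule just described may make the failure modes concrete. It assumes BOINC's standard Cobblestone scale of 200 credits per GFLOPS-day; the function name and constants are illustrative, not the actual SixTrack server code.

```python
# Illustrative sketch of the runtime-based credit rule described above.
# Assumes BOINC's Cobblestone scale (200 credits per GFLOPS-day); names
# and constants are hypothetical, not the actual server implementation.

COBBLESTONES_PER_GFLOPS_DAY = 200.0
SECONDS_PER_DAY = 86_400.0
RUNTIME_CAP = 10 * 3600  # the 10-hour cut from problem 1

def credit_from_runtime(runtime_s: float, platform_gflops: float) -> float:
    """Credit = capped runtime * claimed platform speed, in Cobblestones."""
    capped = min(runtime_s, RUNTIME_CAP)
    return capped * platform_gflops * COBBLESTONES_PER_GFLOPS_DAY / SECONDS_PER_DAY

# Problem 1: a 36-hour job is credited as if it ran only 10 hours.
print(credit_from_runtime(36 * 3600, 1.87))  # ~155.8, same as a 10-hour job
# Problem 3: an Anonymous Platform claiming 10 GFLOPS collects five times
# the credit of the same runtime reported honestly at 2 GFLOPS.
print(credit_from_runtime(15_000, 10.0))     # ~347.2
print(credit_from_runtime(15_000, 2.0))      # ~69.4
```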
Joined: 12 Dec 05 Posts: 31 Credit: 9,709,398 RAC: 0
I couldn't care less about the credits I earn, and would simply be content knowing that my machines are contributing to the science behind the project. That being said, I was curious enough to want to confirm this claim of adjusted credits... Unfortunately, too much time has passed since I processed my last batch of long-running LHC tasks, and consequently they've all been wiped from the server database, so I cannot confirm whether those 55-hour tasks earned much more credit (if any) than the typical 4-5 hour tasks.
Joined: 18 Sep 04 Posts: 163 Credit: 1,682,370 RAC: 0
"I couldn't care less about the credits I earn, and would simply be content knowing that my machines are contributing to the science behind the project."

I concur. Although granting credit according to SixTrack's internal accounting (thinking of #completed turns) would be nice.

KR Michael
Team Linux Users Everywhere
Joined: 14 Jul 05 Posts: 3 Credit: 5,310,429 RAC: 126
If I recall correctly, all three of my active computers had long WUs:
http://lhcathomeclassic.cern.ch/sixtrack/show_host_detail.php?hostid=9964580
http://lhcathomeclassic.cern.ch/sixtrack/show_host_detail.php?hostid=9960841
http://lhcathomeclassic.cern.ch/sixtrack/show_host_detail.php?hostid=9973599

The awarded points were appalling, but I can currently see only one task as validated:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=2490815

There seem to be two WUs waiting for validation:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=2536373
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=2536459

I'm pretty sure there were more, because I was worried those WUs might not complete in time, but all eventually did. Compared to many of my countrymen, who have suddenly gotten huge amounts of points, I've only had an average amount. So it's possible I'm missing some points.

Petri S
Joined: 19 Feb 08 Posts: 708 Credit: 4,336,250 RAC: 0
I got about 13 credits/hour for a 166-hour task. It seems reasonable.
Tullio
Joined: 14 Jul 05 Posts: 3 Credit: 5,310,429 RAC: 126
I decided to investigate some more. I looked at Free-DC's "LHC@Home 1.0 Users Stats" and saw that the "Last Update" column had credits awarded after the regular daily update. I assume that, at the time of writing, it indicates the additionally awarded credit. I'm going to amuse all of you who have written to this thread by showing your and my "Last 28 Days" and "Last Update" credits as shown by Free-DC (I hope you don't mind):

                    Last 28 Days   Last Update
Sunny129:                 94,595        22,877
Michael Karlinsky:        18,932         3,716
tullio:                    3,356         2,493
Igor Zacharov:             2,385           347
Petri S:                 154,342           201

Have I won the lottery? ;)

Petri S
Joined: 7 Apr 12 Posts: 1 Credit: 45,653 RAC: 0
I don't pay any attention to credits; it's not why I contribute CPU time. I do get a kick out of seeing how many operations my computer has performed. I didn't even know how many zeros were in a quadrillion until I had that many computations. I'm looking forward to a quintillion.

Jon Bennett
Joined: 22 Jul 05 Posts: 72 Credit: 3,962,626 RAC: 0
"... Please look at your credits if you care, and if you find problems, discrepancies, or have comments, write to this thread or to my private inbox. I will be looking at the system again on Wednesday (22nd of August)."

I'm not concerned about credit, but I am interested in pointing out examples where there seems to be a discrepancy with what is supposed to have happened. If I understand you correctly, your intention was that the credit for long-running tasks (perhaps ONLY those that still remain in the online database) should have been reviewed and adjusted upwards, by a factor of approximately 10 above what would normally be expected for a standard 1M-turn task.

For many of my hosts, 1M-turn tasks take around 12K-18K secs and are usually awarded credit in the range of 90-180 or thereabouts. On that basis, I would expect 10M-turn tasks that run the full distance to take around 120-180 Ksecs (i.e. around 40 hours) and then be awarded in excess of 1000 credits on average.

I've had a look at the tasks list for one of my hosts. I found the three oldest entries, which must have been 10M-turn tasks because the run times are in excess of 100 Ksecs. The credits for these three are 234.78, 1190.23 and 234.82. The run times are 151,176.70, 182,475.70 and 100,447.60 seconds. The WUIDs are 2503410, 2531308 and 2515332. I've recorded all this in case they get deleted shortly.

My intention is simply to report that, of the three long-running tasks still in the database, only one seems to have been identified and adjusted. I'm not expecting any particular action; I just thought you might be interested to know.

Cheers,
Gary.
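The scaling argument in this post can be spelled out numerically. This small sketch applies the observed 1M-turn credit rates to the three recorded run times; all figures come from the post itself, and the bounds are only a rough plausibility check.

```python
# Rough check of the expectation above: 1M-turn tasks earn 90-180 credits
# over 12-18 ksec, so a full-length 10M-turn task should land well above
# 1000 credits. All numbers are taken from the post.

rate_low = 90 / 18_000     # credits per second, pessimistic
rate_high = 180 / 12_000   # credits per second, optimistic

for runtime_s in (151_176.70, 182_475.70, 100_447.60):
    lo, hi = rate_low * runtime_s, rate_high * runtime_s
    print(f"{runtime_s:>10,.0f} s -> expected {lo:,.0f} to {hi:,.0f} credits")

# The awards of 234.78 and 234.82 fall below even the pessimistic bound;
# only the 1190.23 award lands inside the expected range.
```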
Joined: 18 Sep 04 Posts: 163 Credit: 1,682,370 RAC: 0
Hi all,

I have one long WU left; it's still waiting for a second result:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=2525806
Let's see how many credits it gets.

Michael
Team Linux Users Everywhere
Joined: 27 Oct 07 Posts: 186 Credit: 3,297,640 RAC: 0
Since I was the one who suggested using 'credit from runtime' in the first place (/me ducks :P) - it seemed like a good idea at the time, to address a different problem - I've been keeping a log of credit awarded per hour across five hosts.

http://i1148.photobucket.com/albums/o562/R_Haselgrove/LHCcreditperhour.png

You can see:
- varying credit rates to the left, before the change was made
- largely level rates until 31 July
- a break while I concentrated resources on the monthly SIMAP run
- the experimental long runs, roughly 5-12 August
- more level rates with the newer 'intensity scan' work
- more variable credit since 'credit from runtime' was turned off

I've only bothered to save the data I can easily collect from task listings, 20 at a time, so the x-axis shows my reporting time, which may be very different from the validation time - I had one validated today which was reported on (and will be graphed as) 05 August. The tasks with zero credit are simply Excel's way of interpreting the word 'pending' - i.e. no credit awarded yet; none of mine were actually awarded nul points. The very low-scoring tasks - below 10 per hour - are in some cases long runs, but in other cases very short runs also translate into low hourly rates: 1.4 credits for 7 minutes running, 2.5 credits for 9 minutes. Nobody should worry about those, I think.

I've checked the valid results still visible online against the original data I collected, and no individual task has been re-credited. However, I did get something like 10,000 more credits than usual in yesterday's stats export, so I think the bonus award has been done 'behind the scenes', in a way which doesn't show in the raw result tables.
Joined: 4 May 07 Posts: 250 Credit: 826,541 RAC: 0
Igor,

Thank you for your efforts in sorting out the credits issue. I, for one, don't really put any emphasis on credits other than to know I'm doing some kind of work and can see some measure of results. I run the project because I think it is helping the work at CERN. (Also running T4T.)

Thanks again.
Tom
Joined: 12 Dec 05 Posts: 31 Credit: 9,709,398 RAC: 0
*update* As I stated in my last post, so much time has passed since I completed my last long-running WU that its statistics have already been wiped from the database. However, I did get a pretty massive boost in credits according to BOINCstats today - 22,877 points, to be precise. Just to give you an idea, my RAC is only 3,318 points per day. So I'm finally seeing the effects of the credit correction.
Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0
Petri,

I guess you wonder why your credits in particular were not upgraded by much, although admittedly you did a lot of work. When designing the system, I did not look at user ids at all, only at hosts and workunits. This is important, since the system must be objective in a sense. It calculates

upgrade = #sixtrack_loops * Flops - given_credits

and applies the change only if the upgrade is positive. No user id is involved.

But to analyze further: your hosts in particular report the 10 GFLOPS mark. Therefore, when you deliver the canonical result, in the run-time credit calculation your credits soar. In this way you already received large credit for your work, so there was no need to upgrade.

I hope this clarifies matters. I took your example as an opportunity to explain the strategy. I feel it is important that everybody understands we do not want to break the system the inventors of BOINC put together; it should be fair across all projects. We just correct our own mistakes.

Igor.
skype id: igor-zacharov
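A minimal sketch of the correction rule as stated, assuming the same Cobblestone scale as above; flops_per_loop is a hypothetical placeholder for the project's internal accounting value, and only the "top up if positive, otherwise leave alone" logic is taken from the post.

```python
# Sketch of the upgrade rule: upgrade = #sixtrack_loops * Flops - given_credits,
# applied only when positive. Keyed by host and workunit, never by user id.
# flops_per_loop is a hypothetical stand-in for the internal accounting value.

COBBLESTONES_PER_FLOP = 200.0 / (86_400.0 * 1e9)  # 200 credits per GFLOPS-day

def credit_upgrade(sixtrack_loops: int, flops_per_loop: float,
                   given_credit: float) -> float:
    """Positive top-up from the internal accounting, or 0.0 if it gives less."""
    internal_credit = sixtrack_loops * flops_per_loop * COBBLESTONES_PER_FLOP
    return max(internal_credit - given_credit, 0.0)

# With an assumed 6e7 flops per turn, a 10M-turn task is worth ~1389 credits
# internally: an under-credited result gets the difference, while an inflated
# Anonymous Platform award is left untouched.
print(credit_upgrade(10_000_000, 6.0e7, 234.78))  # ~1154, topped up
print(credit_upgrade(1_000_000, 6.0e7, 811.37))   # 0.0, left alone
```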
Joined: 14 Jul 05 Posts: 3 Credit: 5,310,429 RAC: 126
Igor,

Thank you for responding. Your response raised a couple of further questions. I know you are currently quite busy, so I'll try to be as brief as possible.

When you say:

"But to analyze further: your hosts in particular report the 10 GFLOPS mark."

is it somehow related to this?

"3) Clever people run the Anonymous Platform with an artificially high performance value assigned. A slow platform then gets disproportionately more credit, because credit is calculated as time * platform_performance."

You say:

"Therefore, when you deliver the canonical result, in the run-time credit calculation your credits soar. In this way you already received large credit for your work, so there was no need to upgrade."

I'm not a native English speaker, so I'm reading this in two possible ways:

1) When one of my hosts reports a WU and it's validated, the credits awarded for that WU are already large. For example, the long WU validated from my first message:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=2490815
It had 155.83 credits awarded, so that is already large enough.

2) When one of my hosts reports a WU and it's validated, the credits awarded for that WU are multiplied by a host-specific number and so are already larger than indicated. The same example:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=2490815
It had 155.83 credits awarded, but in reality I have already been given more than that.

Which is correct?

Petri S
Joined: 16 May 11 Posts: 79 Credit: 111,419 RAC: 0
Petri,

The WU#2490815 you indicate as an example is 10M turns and ran for 131,801 seconds on your machine and 635,141 seconds on your wingman's. Your wingman was elected to deliver the canonical result, therefore for both of you the credit was calculated as:

2012-08-16 06:10:18.4368 [WU#2490815][RESULT#5464810] credit_from_runtime 155.83 = 36000s * 1.87GFLOPS

You see the capping at 10 hours (36,000 seconds) here, and it is our mistake. The assignment itself was done based on your wingman's computer speed. Here is an example, WU#1985443 with 1M turns, where your computer was elected to deliver the canonical result:

2012-07-24 04:44:26.0601 [WU#1985443][RESULT#4393043] credit_from_runtime 811.37 = 35051s * 10.00GFLOPS

Here both you and your wingman were assigned a credit of 811, where a "normal" value of around 150 would be appropriate. When applying the corrections I looked over a period of three weeks (from 23/07 till 16/08, with 1M and 10M-turn jobs) and summed up all contributions.

For your amusement, here is the debug output of the credit calculation for your host:

[RESULT#5887203] raw credit: 367.66 (15883.05 sec, 10.00 est GFLOPS)
[RESULT#5887203] anon platform, scaling by 0.352973 (0.25/0.71)
[RESULT#5887203] anon platform, returning 129.78
[RESULT#5887203] updating HAV PFC 0.88 et 8.82392e-11 turnaround 56011
[RESULT#5887203] get_pfc() returns credit 129.775 mode approx

and for your wingman:

[RESULT#5887204] raw credit: 67.15 (23261.86 sec, 1.25 est GFLOPS)
[RESULT#5887204] [AV#61] normal case. 23262 sec, 1.2 GFLOPS. raw credit: 67.15
[RESULT#5887204] host scale: 1.66 (0.213730/0.128826)
[RESULT#5887204] applying app version scale 1.171
[RESULT#5887204] [AV#61] PFC avgs with 0.16116 (2.90088e+13/1.8e+14)
[RESULT#5887204] updating HAV PFC 0.16 et 1.29233e-10 turnaround 85161
[RESULT#5887204] get_pfc() returns credit 130.483 mode normal
[WU#2685431] assign_credit_set: credit 130.483

Thus the normal credit for both your host and your wingman's would be 130 in this case, but:

[WU#2685431][RESULT#5887203] credit_from_runtime 367.66 = 15883s * 10.00GFLOPS
[RESULT#5887203 wlxscan_wcbb6_....._sixvf_boinc743_0] Valid; granted 367.663303 credit [HOST#9964580]

We are now set to apply the default credit calculation, so you don't have to do anything. This is just for your understanding and amusement.

skype id: igor-zacharov
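For anyone checking the arithmetic, the printed figures in these logs are reproduced exactly by the 200-credits-per-GFLOPS-day Cobblestone scale; the snippet below is purely a reading aid, not project code.

```python
# Re-deriving the figures in the logs above with the Cobblestone scale
# (200 credits per GFLOPS-day).

def runtime_credit(seconds: float, gflops: float) -> float:
    return seconds * gflops * 200.0 / 86_400.0

print(runtime_credit(36_000, 1.87))     # 155.83 - WU#2490815, capped at 10 hours
print(runtime_credit(35_051, 10.00))    # 811.37 - WU#1985443, 10 GFLOPS claimed
raw = runtime_credit(15_883.05, 10.00)  # 367.66 - RESULT#5887203 raw credit
print(raw * 0.352973)                   # 129.78 - after the anon-platform scaling
```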
Joined: 23 Feb 06 Posts: 1 Credit: 1,866,360 RAC: 0
I am not concerned at all. But... was this some change you applied for just a day? OMFG, I got more credit in a day than in a month!! My computer's tasks are here:
http://lhcathomeclassic.cern.ch/sixtrack/results.php?hostid=9945382
And I can't see at all what happened... One annoying task no longer appears: it took more than a hundred hours, and I had to put my laptop in "consume as much energy as you can" mode so it would finish on time, while I usually run in "save energy" mode, in which my laptop heats up a lot less. And I received... 87 credits, I think... man, I don't care, but that was not fair =D

See ya!

EDIT: The numbers are not in the picture. I went from 8,500 to 13,500 in just a day, which, as you can see from the slope, is quite a lot!
Joined: 18 Sep 04 Posts: 163 Credit: 1,682,370 RAC: 0
Hi all,

That one validated, and 1,319.86 credits were issued for 172,315.40 CPU seconds; my result is the canonical result. Too lazy to do the maths.

Michael
Team Linux Users Everywhere
Joined: 4 May 07 Posts: 250 Credit: 826,541 RAC: 0
Just finished three long WUs, and the credit is almost nonexistent. Here is one of the tasks:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=3241806
and here is another:
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=3238744
I have to say that I am NOT a credit hound, but I would have thought the number of CPU seconds would have resulted in higher credit. Or is this the new credit calculation that is being used? I know each project has its own credit scheme, but here is what I get on SETI:
http://setiathome.berkeley.edu/workunit.php?wuid=1035792932

Tom
Joined: 27 Oct 07 Posts: 186 Credit: 3,297,640 RAC: 0
1) Those are 'wlxscan' tasks - normal length for this project, not the long 'wlxu2' variety.
2) You should be comparing runtime (total elapsed working time), not CPU time.
3) Especially since the SETI task you linked was computed on your GTS 450 Fermi GPU - the CPU will have contributed practically nothing.
Joined: 10 Sep 08 Posts: 6 Credit: 6,350,253 RAC: 0
I wish I had as high a credit as you, Tom. ;-) I get 0.84!
http://lhcathomeclassic.cern.ch/sixtrack/workunit.php?wuid=3208867