Message boards : Theory Application : New version 263.90

Erich56

Joined: 18 Dec 15
Posts: 1686
Credit: 100,482,786
RAC: 104,432
Message 39791 - Posted: 2 Sep 2019, 4:30:18 UTC - in response to Message 39790.  

No Comments?
The allocation of credit points has always been a big conundrum. I have brought up this topic several times, but never got a logical explanation. It seems that no one can really explain it.
ID: 39791
Harri Liljeroos
Joined: 28 Sep 04
Posts: 674
Credit: 43,168,451
RAC: 16,096
Message 39792 - Posted: 2 Sep 2019, 6:59:41 UTC - in response to Message 39791.  

[quote]No Comments?
The allocation of credit points has always been a big conundrum. I have brought up this topic several times, but never got a logical explanation. It seems that no one can really explain it.[/quote]

The credits are somehow tied to the web preference for the number of CPUs to use for a multi-threaded task. If you increase the number of CPUs on the web site but use app_config.xml to limit the number of CPUs actually used during calculation, you will get higher credit for the task. At least this is my experience with Theory tasks.
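For anyone who wants to reproduce that setup, a minimal app_config.xml might look like the sketch below. It goes into the LHC@home project directory, followed by "Options → Read config files" in BOINC Manager. The app_name and plan_class shown here are my assumptions; check client_state.xml for the exact values on your client.

<app_config>
  <app_version>
    <app_name>Theory</app_name>
    <plan_class>vbox64_mt_mcore</plan_class>
    <!-- run the Theory VM with a single core, regardless of the web preference -->
    <avg_ncpus>1</avg_ncpus>
    <cmdline>--nthreads 1</cmdline>
  </app_version>
</app_config>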
ID: 39792
Crystal Pellet
Volunteer moderator
Volunteer tester
Joined: 14 Jan 10
Posts: 1268
Credit: 8,433,416
RAC: 3,056
Message 39793 - Posted: 2 Sep 2019, 8:12:53 UTC - in response to Message 39792.  
Last modified: 2 Sep 2019, 14:26:50 UTC

[quote][quote]The allocation of credit points has always been a big conundrum. I have brought up this topic several times, but never got a logical explanation. It seems that no one can really explain it.[/quote]
The credits are somehow tied to the web preference for the number of CPUs to use for a multi-threaded task. If you increase the number of CPUs on the web site but use app_config.xml to limit the number of CPUs actually used during calculation, you will get higher credit for the task. At least this is my experience with Theory tasks.[/quote]

That's correct. Example task: https://lhcathome.cern.ch/lhcathome/result.php?resultid=244420894

A 12-core machine. In the web preferences, Max # CPUs is set to "No limit", so the Device peak FLOPS = 12 * the measured floating point speed (benchmark) of 3.89 GFLOPS, giving 46.66 GFLOPS for the credit calculation.
In app_config.xml, however, the host is set up to create single-core VMs, so the credit comes out 12 times too high. app_config.xml is, and always has been, a local instrument of the BOINC user that the server is not aware of.
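To make the arithmetic explicit, here is a rough sketch of the proportionality described above. It only models the peak-FLOPS part, not BOINC's full CreditNew formula, and the numbers are the ones from this example.

# Rough sketch of the effect described above; not BOINC's full CreditNew formula.
benchmark_gflops = 3.89   # measured floating point speed per core (Whetstone)
pref_ncpus = 12           # "Max # CPUs" web preference ("No limit" on a 12-core host)
actual_ncpus = 1          # cores each VM really uses, forced locally via app_config.xml

# The server derives the device peak FLOPS from the web preference,
# because it knows nothing about the local app_config.xml.
device_peak_gflops = pref_ncpus * benchmark_gflops      # ~46.7 GFLOPS

# Credit scales roughly with that peak figure, so the reward is inflated
# by the ratio of claimed cores to cores actually used.
inflation_factor = pref_ncpus / actual_ncpus            # 12x too high here
print(device_peak_gflops, inflation_factor)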
ID: 39793
Luigi R.
Joined: 7 Feb 14
Posts: 99
Credit: 5,180,005
RAC: 0
Message 39796 - Posted: 2 Sep 2019, 11:50:11 UTC

1 CPU server-side.

https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=123161593
ID: 39796
Crystal Pellet
Volunteer moderator
Volunteer tester
Joined: 14 Jan 10
Posts: 1268
Credit: 8,433,416
RAC: 3,056
Message 39798 - Posted: 2 Sep 2019, 14:35:43 UTC - in response to Message 39796.  

[quote]1 CPU server-side.

https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=123161593[/quote]

Come on!
I suppose you are not satisfied with the credit?

The measured floating point speed of 0.87 billion ops/sec - is that correct?

Run the CPU benchmarks from BOINC Manager's Tools menu while the system is not being used for anything else.
ID: 39798
Erich56
Joined: 18 Dec 15
Posts: 1686
Credit: 100,482,786
RAC: 104,432
Message 39799 - Posted: 2 Sep 2019, 14:45:41 UTC - in response to Message 39798.  

[quote]I suppose you are not satisfied with the credit?[/quote]
Hihi, with a value of 97.40 I wouldn't be satisfied either :-)
Even my old, old, old AMD Turion Dual-Core ZM-80 (in a notebook) produces more than 200 points in about the same processing time.
ID: 39799
Luigi R.
Joined: 7 Feb 14
Posts: 99
Credit: 5,180,005
RAC: 0
Message 39801 - Posted: 2 Sep 2019, 15:01:53 UTC - in response to Message 39798.  
Last modified: 2 Sep 2019, 15:13:48 UTC

[quote]The measured floating point speed of 0.87 billion ops/sec - is that correct?

Run the CPU benchmarks from BOINC Manager's Tools menu while the system is not being used for anything else.[/quote]
It's the correct value for 1.4-1.6 GHz and, yes, my notebook is currently set to 1.2 GHz, sometimes to 1.8 GHz to speed up other tasks.

Here are my benchmark results at various frequencies (from November 2018).

freq    ( float/ int) [million ops/sec]
 800MHz ( 452/ 3048)
1000MHz ( 565/ 3831)
1200MHz ( 679/ 4649)
1400MHz ( 792/ 5413)
1600MHz ( 905/ 6188)
1800MHz (1017/ 6942)
2000MHz (1131/ 7727)
2200MHz (1243/ 8484)

The previously measured floating point speed was 18.97 GFLOPS, which has always been impossible.
https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=122897843
So why should credits depend on that value now?
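For what it's worth, a quick sanity check on the table above (purely my own back-of-the-envelope calculation): the floating point benchmark scales almost linearly with clock frequency, at roughly 0.57 GFLOPS per GHz, so 18.97 GFLOPS would imply a clock this CPU can't reach.

# Check the benchmark table above: floating point speed vs. clock frequency.
table = {800: 452, 1000: 565, 1200: 679, 1400: 792,
         1600: 905, 1800: 1017, 2000: 1131, 2200: 1243}   # MHz -> million ops/sec

slopes = [mflops / (mhz / 1000) for mhz, mflops in table.items()]
gflops_per_ghz = sum(slopes) / len(slopes) / 1000          # ~0.565 GFLOPS per GHz

# The 18.97 GFLOPS once recorded for this host would imply an absurd clock:
implied_ghz = 18.97 / gflops_per_ghz                       # ~33.6 GHz
print(round(gflops_per_ghz, 3), round(implied_ghz, 1))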
ID: 39801
Luigi R.
Joined: 7 Feb 14
Posts: 99
Credit: 5,180,005
RAC: 0
Message 39802 - Posted: 2 Sep 2019, 15:09:13 UTC - in response to Message 39799.  

[quote][quote]I suppose you are not satisfied with the credit?[/quote]
Hihi, with a value of 97.40 I wouldn't be satisfied either :-)
Even my old, old, old AMD Turion Dual-Core ZM-80 (in a notebook) produces more than 200 points in about the same processing time.[/quote]

The free ride is over for me. :(
I was using my notebook only because credits were good.
Daily credit was 3700 per thread, independent of CPU frequency.
ID: 39802
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 223,039,416
RAC: 136,902
Message 39803 - Posted: 2 Sep 2019, 15:31:17 UTC - in response to Message 39801.  

The benchmark results seem to be valid for that CPU.

Recent Linux kernels usually activate a number of mitigations against various CPU security vulnerabilities, at the expense of (much) lower benchmark results.
As the benchmark results are used for credit calculation, this results in lower credits.
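If you want to see which of these mitigations your kernel has enabled, recent kernels report them under /sys; a small sketch (Linux only):

# List the CPU vulnerability mitigations reported by the running Linux kernel.
# Kernels from roughly 4.15 onwards expose this information under /sys.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("No vulnerability reporting found (kernel too old, or not Linux).")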

OTOH BOINC's credit calculation includes some components to identify outliers or cheats.
As a result the credit reward for a single task should not be treated as stable.
It will need at least a week without any setup changes to get stable values.


CP already explained another major factor that influences multicore apps.
ID: 39803
Jim1348
Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 39804 - Posted: 2 Sep 2019, 15:58:17 UTC - in response to Message 39803.  

[quote]Recent Linux kernels usually activate a number of mitigations against various CPU security vulnerabilities, at the expense of (much) lower benchmark results.
As the benchmark results are used for credit calculation, this results in lower credits.[/quote]

I suppose that is the speculative execution problem (Spectre/Meltdown) affecting the Intel CPUs.
So do the benchmarks change for the AMD CPUs?
ID: 39804
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 223,039,416
RAC: 136,902
Message 39805 - Posted: 2 Sep 2019, 16:16:16 UTC - in response to Message 39804.  

[quote][quote]Recent Linux kernels usually activate a number of mitigations against various CPU security vulnerabilities, at the expense of (much) lower benchmark results.
As the benchmark results are used for credit calculation, this results in lower credits.[/quote]
I suppose that is the speculative execution problem (Spectre/Meltdown) affecting the Intel CPUs.
So do the benchmarks change for the AMD CPUs?[/quote]

AMD CPUs are also affected.
See:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/index.html
ID: 39805
Jim1348
Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 39806 - Posted: 2 Sep 2019, 16:48:31 UTC - in response to Message 39805.  
Last modified: 2 Sep 2019, 16:54:24 UTC

It appears that it could affect different projects differently. I look mainly at the execution times, and see no obvious difference thus far for LHC on my i7-9700. But I could go to my Ryzens if necessary.
https://www.extremetech.com/computing/291649-intel-performance-amd-spectre-meltdown-mds-patches

It is another juggling act we have to do.
ID: 39806
Erich56
Joined: 18 Dec 15
Posts: 1686
Credit: 100,482,786
RAC: 104,432
Message 39863 - Posted: 8 Sep 2019, 11:33:51 UTC - in response to Message 39790.  

On September 2nd, maeax wrote:
[quote]There are tasks from some computers with more than 5k points:
https://lhcathome.cern.ch/lhcathome/results.php?hostid=10555784&offset=0&show_names=0&state=4&appid=13 ...[/quote]
This is a task from one of my machines. Out of the thousands of tasks from thousands of volunteers that one can find here, why did you pick exactly this one? Do you have a problem with my tasks, or with me personally? One could assume the latter, reading some of your replies to my postings in other threads here.
ID: 39863
Luigi R.
Joined: 7 Feb 14
Posts: 99
Credit: 5,180,005
RAC: 0
Message 39864 - Posted: 8 Sep 2019, 15:23:49 UTC - in response to Message 39803.  

[quote]The benchmark results seem to be valid for that CPU.
[...]
OTOH BOINC's credit calculation includes some components to identify outliers or cheats.
As a result the credit reward for a single task should not be treated as stable.
It will need at least a week without any setup changes to get stable values.[/quote]
The credit drop does not look benchmark-related.

Now my notebook gets 477 credits/day per thread, almost exactly 1/n_threads of the previous amount.
I can't accept this while there are still hosts that get more than ~10k per thread per day, so I turned it off.
ID: 39864
maeax
Joined: 2 May 07
Posts: 2071
Credit: 156,192,791
RAC: 103,819
Message 39869 - Posted: 8 Sep 2019, 17:28:17 UTC - in response to Message 39863.  

[quote]This is a task from one of my machines. Out of the thousands of tasks from thousands of volunteers that one can find here, why did you pick exactly this one? Do you have a problem with my tasks, or with me personally? One could assume the latter, reading some of your replies to my postings in other threads here.[/quote]

Fair play is only for the other volunteers, Erich56?
ID: 39869
Crystal Pellet
Volunteer moderator
Volunteer tester
Joined: 14 Jan 10
Posts: 1268
Credit: 8,433,416
RAC: 3,056
Message 39871 - Posted: 8 Sep 2019, 19:44:36 UTC - in response to Message 39869.  

[quote][quote]This is a task from one of my machines. Out of the thousands of tasks from thousands of volunteers that one can find here, why did you pick exactly this one? Do you have a problem with my tasks, or with me personally? One could assume the latter, reading some of your replies to my postings in other threads here.[/quote]
Fair play is only for the other volunteers, Erich56?[/quote]

Thousands of volunteers? Active with Theory vbox in the last 24 hours: 112.
Fair play? It is the project setting "Max # of CPUs" (8 or No limit) that causes the high credits when an app_config with lower ncpus values is used.
See my post with the wish/advice to get rid of the multi-core Theory application. Make it single-core; volunteers could then use app_config.xml to make it e.g. dual-core when RAM is tight.

Please be kind to each other ;)
ID: 39871
Erich56
Joined: 18 Dec 15
Posts: 1686
Credit: 100,482,786
RAC: 104,432
Message 39872 - Posted: 8 Sep 2019, 20:29:20 UTC - in response to Message 39869.  

[quote]Fair play is only for the other volunteers, Erich56?[/quote]
Could you please explain this weird accusation further?
ID: 39872
Toby Broom
Volunteer moderator
Joined: 27 Sep 08
Posts: 798
Credit: 644,856,397
RAC: 226,371
Message 39888 - Posted: 10 Sep 2019, 6:19:09 UTC

Can you adjust the memory calculation? The working set is calculated at 19 GB for an unlimited/8-core task, which seems a little excessive.
ID: 39888
Crystal Pellet
Volunteer moderator
Volunteer tester
Joined: 14 Jan 10
Posts: 1268
Credit: 8,433,416
RAC: 3,056
Message 39890 - Posted: 10 Sep 2019, 8:01:14 UTC - in response to Message 39888.  

[quote]Can you adjust the memory calculation? The working set is calculated at 19 GB for an unlimited/8-core task, which seems a little excessive.[/quote]

It could be more! Have you seen your Xeon E5-2696 (32 cores): "Setting Memory Size for VM. (24750MB)"?
ID: 39890
Toby Broom
Volunteer moderator
Joined: 27 Sep 08
Posts: 798
Credit: 644,856,397
RAC: 226,371
Message 39896 - Posted: 10 Sep 2019, 17:33:50 UTC - in response to Message 39890.  

I didn't check them all; I assume it's just doing (cores on the machine) x n GB.

I would use the web config, but it doesn't work great for my machines either. I just set it to unlimited and manage it myself, but the working set size is calculated server-side, so I can't fix that myself.

I guess I could create a script to edit all the workunits and re-start BOINC every time a new WU is started....
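Purely as an illustration of that idea (not something I have tested): a script like the sketch below could rewrite the memory bound of queued Theory workunits in client_state.xml while the client is stopped. The file path and the 4 GB figure are just assumptions, and this only changes the bound BOINC's scheduler uses, not the RAM the VM is actually given.

# Sketch of the "edit all the workunits" idea: lower rsc_memory_bound for
# Theory workunits in client_state.xml so the scheduler stops reserving ~19 GB.
# Stop the BOINC client first and keep the backup this script writes.
import re
import shutil

STATE = "/var/lib/boinc-client/client_state.xml"   # adjust to your installation
NEW_BOUND = 4.0e9                                   # 4 GB, illustrative value only

shutil.copy(STATE, STATE + ".bak")
with open(STATE) as f:
    text = f.read()

def patch(match):
    block = match.group(0)
    if "Theory" not in block:                       # only touch Theory workunits
        return block
    return re.sub(r"<rsc_memory_bound>[^<]*</rsc_memory_bound>",
                  f"<rsc_memory_bound>{NEW_BOUND:.6e}</rsc_memory_bound>", block)

text = re.sub(r"<workunit>.*?</workunit>", patch, text, flags=re.S)
with open(STATE, "w") as f:
    f.write(text)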
ID: 39896