Message boards :
Number crunching :
CPU-Power
Joined: 1 Sep 04 Posts: 506 Credit: 118,619 RAC: 0
<blockquote>It would be interesting to know how much CPU power all LHC@Home participants are generating.</blockquote> Taken from a posting by Ben Segal in another thread: <blockquote>Well, San, I can assure you that no supercomputers at CERN are working on LHC accelerator beam tracking, and even if they were, they would be much less productive than LHC@home, which is delivering around 3 teraflops right now. The beam studies being done on LHC@home are serious production work. There is a lot more of them to be done over the next weeks. We will make some written results available in due course to our crunching community. Ben Segal / LHC@home</blockquote> Gaspode the UnDressed http://www.littlevale.co.uk
Joined: 16 Jul 05 Posts: 65 Credit: 369,728 RAC: 0
According to the most recent BOINC benchmark scores I have here on my screen:

- an AMD AthlonXP/Sempron at 2.0 GHz equals around 1.89 GFLOPS and 3.19 GIOPS
- an AMD AthlonXP/Sempron at 2.2 GHz equals around 2.05 GFLOPS and 3.45 GIOPS
- an Intel Xeon/P4 at 3.0 GHz equals around 2.68 GFLOPS and 2.16 GIOPS

That puts my "farm" at a total of nearly 17 GFLOPS and 22 GIOPS, of which 75% is dedicated to LHC :) [edit] Did a quick comparison to Einstein@Home (they publish their estimated TFLOPS); LHC should be getting around 20 TFLOPS from the active hosts. [/edit]
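The roll-up from per-host benchmark scores to a "farm" total is just a weighted sum. A minimal sketch of that arithmetic, where the host counts are made-up placeholders for illustration (the poster doesn't list an actual inventory), not real data:

```python
# Rough aggregate of BOINC benchmark scores across a small "farm".
# Per-host GFLOPS/GIOPS figures are the ones quoted above; the machine
# counts are illustrative placeholders, not the poster's real inventory.
hosts = [
    # (description, count, gflops, giops)
    ("AthlonXP/Sempron 2.0GHz", 3, 1.89, 3.19),
    ("AthlonXP/Sempron 2.2GHz", 3, 2.05, 3.45),
    ("Xeon/P4 3.0GHz",          2, 2.68, 2.16),
]

total_gflops = sum(count * gf for _, count, gf, _ in hosts)
total_giops  = sum(count * gi for _, count, _, gi in hosts)

print(f"farm total: ~{total_gflops:.1f} GFLOPS, ~{total_giops:.1f} GIOPS")
print(f"share dedicated to LHC at 75%: ~{0.75 * total_gflops:.1f} GFLOPS")
```

With these placeholder counts the GFLOPS total lands near the "nearly 17" quoted above; any mix of eight similar hosts gives the same order of magnitude.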
Joined: 25 Oct 04 Posts: 19 Credit: 2,580 RAC: 0
Lol, and here I am with my silly 1.2 GHz Celeron laptop. Well, everything still counts in the end ;)
Joined: 16 Jul 05 Posts: 65 Credit: 369,728 RAC: 0
True. My problem is, I have a hard time tossing out old computers, so I just keep them around and upgrade them a bit from time to time (or hand them down to Dad when he starts complaining his computer is slow ;) but then I get his old one in return). BTW, it would be nice to hear a more recent guesstimate of the actual TFLOPS generated by the participants. The quote of 3 teraflops is over a month old, from just after the sign-ups were opened, and my guesstimate is a very rough one, based on the number of active hosts posted on the BoincStats website.
Joined: 27 Jul 04 Posts: 182 Credit: 1,880 RAC: 0
Well, over the last 24 hours we have run an average of 2.21 teraflops. Chrulle Research Assistant & Ex-LHC@home developer Niels Bohr Institute
Joined: 16 Jul 05 Posts: 65 Credit: 369,728 RAC: 0
<blockquote>Well, over the last 24 hours we have run an average of 2.21 teraflops.</blockquote> Sounds roughly equivalent to an IBM cluster running 768 Opterons @ 2.0 GHz. Seems a bit low for what, nearly 10,000 active hosts?
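The Opteron comparison is easy to sanity-check: dividing the 2.21 TFLOPS total by an assumed per-core Whetstone score gives the equivalent core count. The ~2.9 GFLOPS figure for a 2.0 GHz Opteron is an assumption chosen to match the comparison, not a number from the thread:

```python
# Back-of-the-envelope check of "2.21 TFLOPS ~ 768 Opterons @ 2.0 GHz".
# The per-core score is an assumed BOINC Whetstone figure, not measured here.
total_tflops = 2.21
gflops_per_opteron = 2.9  # assumed score for one 2.0 GHz Opteron core

equivalent_cores = total_tflops * 1000 / gflops_per_opteron
print(f"~{equivalent_cores:.0f} Opteron-class cores")
```

That comes out in the low 760s, so the 768-Opteron comparison is internally consistent under this assumption.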
Joined: 16 Jul 05 Posts: 84 Credit: 1,875,851 RAC: 0
<blockquote><blockquote>Well, over the last 24 hours we have run an average of 2.21 teraflops.</blockquote> Sounds roughly equivalent to an IBM cluster running 768 Opterons @ 2.0 GHz. Seems a bit low for what, nearly 10,000 active hosts?</blockquote> Don't forget that most hosts are slower and not running LHC 24/7. Linux Users Everywhere @ BOINC http://lhcathome.cern.ch/team_display.php?teamid=717
Joined: 22 Jul 05 Posts: 31 Credit: 2,909 RAC: 0
<blockquote>Sounds roughly equivalent to an IBM cluster running 768 Opterons @ 2.0 GHz. Seems a bit low for what, nearly 10,000 active hosts?</blockquote> Not really. Let's assume that most hosts are running 3 projects; that takes us down to a bit over 3,000 full-time hosts. Further, at least half of them are slower than the Opterons, which takes us down to about 2,000 hosts. And most of these machines run the screensaver and so work only when the machine is not being used by its owner, so we're down to around 1,000 or less. All very rough estimates, of course, but the final size sounds reasonable.
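The whittling-down above can be written out as a quick chained calculation. Every discount factor here is the poster's rough guess restated as a number, not a measurement:

```python
# "Effective full-time hosts" estimate, following the reasoning above.
# All three discount factors are guesses from the post, not measurements.
active_hosts = 10_000

per_project = active_hosts / 3          # hosts split across ~3 projects
faster_half = per_project * 0.6         # keep the ~60% not slower than an Opteron
idle_only   = faster_half * 0.5         # screensaver hosts crunch only when idle

print(f"~{idle_only:.0f} effective Opteron-class full-time hosts")
```

The chain lands at about 1,000 effective hosts, which is why ~2.21 TFLOPS (roughly 768 Opterons' worth) doesn't look unreasonably low for 10,000 nominally active machines.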
Joined: 16 Jul 05 Posts: 65 Credit: 369,728 RAC: 0
Desti, I'm not sure about the slower bit when looking at the CPU distribution for the project. klasm, it seems your assumptions are as rough as mine ;) Chrulle, looking at the graphs over on BoincStats, it seems the last 24 hours are not what I would call representative for the project: the "granted credit per day" average looks like it might actually be 3 times higher than the credit granted over the past few days (possibly due to the end of the previous batch and people switching to their "backup" projects?). Would it be possible, just for statistics' sake, to measure the averages across a longer period, like the Einstein project publishes on their server status page?
Joined: 13 Sep 05 Posts: 14 Credit: 1,606 RAC: 0
Guys, I think it also has something to do with the mainboard chipset. I've noticed some users with the same CPU as mine generating more or fewer million flops/sec. Do you think that has something to do with the "CPU power"?
Joined: 16 Jul 05 Posts: 65 Credit: 369,728 RAC: 0
<blockquote>Guys, I think it also has something to do with the mainboard chipset. I've noticed some users with the same CPU as mine generating more or fewer million flops/sec. Do you think that has something to do with the "CPU power"?</blockquote> The question is slightly off-topic in this thread, but... BOINC reads the CPU name, not the actual settings. So a machine reporting to be an XP 2500+ might actually be overclocked to something like XP 3200+ speeds, or down-clocked a bit to serve as an HTPC in someone's living room (lower speed, lower vcore => less noise). Other than that, unattended benchmarks seem to yield higher scores than benchmarks the user initiates (mouse movement can actually have a negative influence on your benchmark scores).
©2024 CERN