Message boards : LHC@home Science : GPU Computing

Profile Balmer

Joined: 22 Nov 05
Posts: 8
Credit: 91,168
RAC: 0
Message 25640 - Posted: 11 Jun 2013, 7:35:28 UTC
Last modified: 11 Jun 2013, 7:39:48 UTC

Hello ...

GPU computing is much, much faster than CPU computing.

So it would be very much appreciated to see a GPU client for LHC@home (and other BOINC projects).

I have contributed my CPUs and GPUs to BOINC (Berkeley Open Infrastructure for Network Computing) for more than 10 years, and the total amount of computation is so high you can hardly count it - hehe - and it keeps growing. I have already achieved some real things for this world with BOINC, and I'll keep going...

For GPU computing (here is just one example), please have a look at: http://boinc.berkeley.edu/wiki/GPU_computing

Best Greetings

Balmer
Switzerland



PS: At the moment I have 2 ATI Radeon HD 9670 cards running for some projects that use GPU computing - billions/trillions of operations per second. My name on BOINC: Balmer
tom310

Joined: 28 Aug 12
Posts: 15
Credit: 500,336
RAC: 0
Message 25641 - Posted: 12 Jun 2013, 0:24:58 UTC - in response to Message 25640.  
Last modified: 12 Jun 2013, 0:46:04 UTC

This was already discussed elsewhere: http://lhcathomeclassic.cern.ch/sixtrack/forum_thread.php?id=3743

But I want to add just another two cents.

Since I joined this project, it has been idle about 50% of the time because no WUs were available. With GPUs running, this would be 70% or higher, so I see no benefit here. Most of the WUs were crunched within a few days, which sounds OK to me considering the size and pace of work of an organization like CERN (please Eric, do not take this personally).
The tail of undone work lurking around for a longer time has two underlying causes: computers storing huge numbers of WUs from different BOINC projects and always crunching data close to the corresponding deadline, and computers downloading WUs and then dropping out. I do not see why GPU users would be more or less reliable than CPU users in either case.
Lately, the server announced more than once that the amount of returned data was pushing the SixTrack infrastructure to its limits (disk-space-wise); GPU computing would... [I think you can extrapolate this].
Furthermore, and this is discussed in the above-mentioned thread, GPU development is still fast-moving and immature. Manufacturer support is patchy at best, and often poor.
Having worked on small-angle neutron scattering, I understand the temptation to try new and fancier things (I thought about writing "toys"), but I also see an incredible amount of effort achieving only a small result.
Maybe there will be a need for faster computing in the future; especially if the project wants to study space-charge effects with a very large number of particles, there will be a place for GPUs. But I think this requires standardized GPU capabilities, so you do not need a custom-made application for each available GPU/driver version. And if SixTrack focuses on Kepler, I can already hear the other GPUs cry foul.
Having said all this, to me SixTrack is an MVP (most valuable project) candidate in the BOINC family, and I dedicate all I have (unfortunately only two old dual-cores) to this project, although the pay, compared to other projects, is lousy (I must be a true scientist).
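The space-charge case is a good fit because tracking is embarrassingly parallel: each particle evolves independently under the one-turn map, so a GPU can assign one thread per particle. A toy Python sketch of that independence (a plain linear rotation, not SixTrack's actual physics):

```python
import math

def track_particle(x, px, turns, mu=0.252):
    """Advance one particle through `turns` applications of a toy
    one-turn map (a pure phase-space rotation with tune mu).
    Illustrative only -- not SixTrack's actual physics."""
    c, s = math.cos(2 * math.pi * mu), math.sin(2 * math.pi * mu)
    for _ in range(turns):
        x, px = c * x + s * px, -s * x + c * px
    return x, px

# No particle depends on any other, so on a GPU each one
# could simply be handled by its own thread.
bunch = [(1e-3 * i, 0.0) for i in range(1000)]
results = [track_particle(x, px, turns=100) for x, px in bunch]
```

Real tracking adds nonlinear elements and far more turns, but this independence is exactly what would make a one-thread-per-particle GPU port attractive.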
Eric Mcintosh
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 12 Jul 11
Posts: 852
Credit: 1,619,050
RAC: 0
Message 25642 - Posted: 12 Jun 2013, 11:49:43 UTC - in response to Message 25641.  

Thank you Tom; there is nothing there to upset me. I would like to port to GPU to prove my numeric compatibility, for one, and also because they are there, and, as you say, to get ready for space-charge effects. I would also like to use ARM processors. For the moment I want to concentrate on publishing my paper on floating-point result replication, and believe me, that is very time-consuming in terms of testing. I hope to soon run very many jobs to get statistics on the BOINC result validation failures. I agree we seem to have more capacity than we can use. Hopefully I can adapt a space-charge application to BOINC, but I believe those programs mainly use MPI... we shall see. (Retired 7 years now, but I still hope to continue for a while yet!) Eric.
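As a tiny illustration of why floating-point result replication is hard: IEEE-754 addition is not associative, so summing the same numbers in a different order can flip the last bits, which is already enough to break a naive bitwise comparison between two volunteers' results. A minimal Python example:

```python
# Floating-point addition is not associative: the same numbers summed
# in a different order give a bitwise-different result, so naive
# cross-platform result comparison can fail even when both runs are "correct".
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)   # False with IEEE-754 doubles
```

The difference is on the order of one unit in the last place, yet a validator doing exact comparison would reject the pair.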
tullio

Joined: 19 Feb 08
Posts: 602
Credit: 3,745,015
RAC: 52
Message 25643 - Posted: 14 Jun 2013, 17:13:17 UTC
Last modified: 14 Jun 2013, 17:13:56 UTC

I am running 7 BOINC projects. Of those, only 2 (SETI and Einstein) can use a GPU. Although I have bought one card, I haven't installed it so far. What for?
Tullio
Profile Tom95134

Joined: 4 May 07
Posts: 250
Credit: 826,541
RAC: 0
Message 25644 - Posted: 15 Jun 2013, 4:09:03 UTC - in response to Message 25643.  

I am running 7 BOINC projects. Of those, only 2 (SETI and Einstein) can use a GPU. Although I have bought one card, I haven't installed it so far. What for?
Tullio


I also run SETI and Einstein but do not allow Einstein tasks to run on the GPU. While I haven't tried it lately, I found early on that Einstein pushed out the same longer-running tasks as both GPU and CPU tasks, while SETI created short-duration tasks to run on the GPU. Since BOINC didn't (still doesn't?) do task switching where the GPU is concerned, the longer Einstein tasks tended to hog the GPU, whereas SETI's didn't.

I haven't tried accepting GPU tasks for both projects (on the same machine) recently, and maybe I need to see if the problem still occurs. Frankly, I am quite happy to allow only SETI to run GPU tasks, because then a large amount of SETI work gets pumped through. Einstein, Test4Theory, and LHC run CPU tasks.
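For what it's worth, one way to pin the GPU to a single project on a given machine (besides the website preferences) is the exclude_gpu option in the BOINC client's cc_config.xml. A minimal sketch - the project URL and device number here are examples, adjust them for your own setup:

```xml
<cc_config>
  <options>
    <!-- Keep this project off GPU device 0; other projects still get it. -->
    <!-- URL and device_num are examples only. -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

After editing the file, the client has to re-read it (in BOINC Manager, if I remember right: Advanced -> Read config files) or be restarted.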

Windows 7, INTEL i7-2600, 16GB RAM, GPU GeForce 450 GTS
tullio

Joined: 19 Feb 08
Posts: 602
Credit: 3,745,015
RAC: 52
Message 25645 - Posted: 15 Jun 2013, 13:18:37 UTC

I think Nvidia is using SETI@home as a testing ground for GPU processing. The Titan supercomputer uses 16,000 Kepler boards. AMD is not doing much to support GPU processing on its boards, and the OpenCL development work is done by volunteers.
Tullio
Profile Tom95134

Joined: 4 May 07
Posts: 250
Credit: 826,541
RAC: 0
Message 25647 - Posted: 15 Jun 2013, 19:03:20 UTC

Just a follow-up to my previous posting about limiting the GPU to SETI.

This morning I enabled the NVIDIA GPU for Einstein. The task that was downloaded has a running time of 6 hours; normal CPU tasks for Einstein are in the range of 2 hours.

Of course BOINC handles task switching for CPU work, but I think GPU work is still locked to one task until complete.

No new tasks for Einstein until I see what the results are and how well it shares the sandbox with SETI.
Profile Tom95134

Joined: 4 May 07
Posts: 250
Credit: 826,541
RAC: 0
Message 25648 - Posted: 15 Jun 2013, 20:37:30 UTC - in response to Message 25647.  
Last modified: 15 Jun 2013, 20:38:19 UTC

Just a follow-up to my previous posting about limiting the GPU to SETI.

This morning I enabled the NVIDIA GPU for Einstein. The task that was downloaded has a running time of 6 hours; normal CPU tasks for Einstein are in the range of 2 hours.

Of course BOINC handles task switching for CPU work, but I think GPU work is still locked to one task until complete.

No new tasks for Einstein until I see what the results are and how well it shares the sandbox with SETI.


I just checked the progress of the Einstein GPU task: it has been running for about 1 hr 10 min continuously on the GPU. My conclusion is that a mix of long-duration and short-duration tasks just doesn't play well together on the GPU.
dschull

Joined: 14 Jan 13
Posts: 2
Credit: 164,996
RAC: 0
Message 25652 - Posted: 19 Jun 2013, 22:12:29 UTC

I have not seen any GPU use from LHC@Home SixTrack at all, regardless of settings.

I read somewhere that SixTrack does not currently support GPU computing.

I have two 7750's crossfired and ready to work, but they idle at 0%.

Am I correct in my presumptions?

tullio

Joined: 19 Feb 08
Posts: 602
Credit: 3,745,015
RAC: 52
Message 25654 - Posted: 20 Jun 2013, 0:06:32 UTC - in response to Message 25652.  

You are correct. Test4Theory@home (LHC@home 2) does not use GPUs either.
Tullio
Profile MAGIC Quantum Mechanic

Joined: 24 Oct 04
Posts: 920
Credit: 39,451,080
RAC: 9,323
Message 25655 - Posted: 21 Jun 2013, 0:40:10 UTC


Always plenty of GPU tasks @ Einstein (the new BRP5's)

They take 10X longer via GPU than the previous BRP4's

And I have been here and there since the beginning so I have done a couple tasks.

(even a Seti Classic member back in 1999 but I quit doing those)

6 GPU cards right now.

So yeah tullio you can install that new card (which one did you get? )

Btw, we have talked about running our laptops before; just a couple of days ago was the one-year anniversary of my laptop running GPU tasks and CPU tasks 24/7

8 cores (just d/l some LHC's since there are some tasks)


Volunteer Mad Scientist For Life
tullio

Joined: 19 Feb 08
Posts: 602
Credit: 3,745,015
RAC: 52
Message 25658 - Posted: 21 Jun 2013, 16:50:15 UTC - in response to Message 25655.  

I bought a Sapphire HD 7770, but it needs a 500 W PSU, so I bought one of those too, a GX-Lite. But my mainboard is a 2008-vintage SuperMicro in a Sun Ultra 20 M2 workstation, its present PSU has only 400 W and no cable to power the graphics board. So I have some doubts that I can install the new PSU.
Tullio
Profile MAGIC Quantum Mechanic

Joined: 24 Oct 04
Posts: 920
Credit: 39,451,080
RAC: 9,323
Message 25659 - Posted: 22 Jun 2013, 3:45:43 UTC


I have 5 of my GPU cards running off Ultra X4's (I got them all on sale when I got the 5 cards)

And a couple of the MB's are older 3-cores with DDR2, so you can't add more than just over 4 GB, but the other ones are DDR3, so I added more to those.

But the one I just added the GeForce 650 Ti 2GB card to has the cheap 250-watt PSU that comes with it, so what I did was plug the GeForce into that box, set it next to one of the PCs that has the Ultra X4 850-watt PSU, and plug the card's power into that PSU, which is also running the 660 Ti in the other box.

All of mine are OC'd too.

If I had known it would work like this, I would have bought only a couple of PSUs and then plugged 2 GPU cards into the same PSU in hosts sitting next to each other (and I have an extra 20-inch fan blowing across all 5 hosts)

I like these Ultra X4 and X3 PSUs, since the price was just over $100 each and they have a lifetime warranty.

So you can compare your MB to the ones I have on my older hosts.

http://lhcathomeclassic.cern.ch/sixtrack/hosts_user.php?userid=5472

My older 3-cores running DDR2 with XP Pro still work great here, doing Einstein GPUs and the T4T 2-core tasks, and they never fail there either.





Volunteer Mad Scientist For Life
Profile Michael H.W. Weber

Joined: 18 Sep 04
Posts: 30
Credit: 5,071,350
RAC: 0
Message 30481 - Posted: 24 May 2017, 19:27:19 UTC

The discussion above is from mid-2013...

In the meantime, many additional DC projects have successfully released powerful GPU clients, and with ATLAS, Theory, CMS & LHCb in addition to the classic LHC SixTrack software, CERN now appears to have plenty of additional tasks ready for computation.

So, again: is it time for GPU clients at CERN, or are there some good reasons why GPU computing is still not utilized?

Michael.
Jim1348

Joined: 15 Nov 14
Posts: 416
Credit: 11,880,818
RAC: 3,397
Message 30482 - Posted: 24 May 2017, 19:30:55 UTC - in response to Message 30481.  

That VirtualBox does not support GPUs is a fairly good one.
Profile Michael H.W. Weber

Joined: 18 Sep 04
Posts: 30
Credit: 5,071,350
RAC: 0
Message 30484 - Posted: 24 May 2017, 21:13:44 UTC
Last modified: 24 May 2017, 21:29:29 UTC

OK, let's exclude SixTrack if you like (it does not require VirtualBox):
For what reason(s) is VirtualBox essential to ATLAS, Theory, CMS and LHCb - especially when Linux clients are recruited (where no code compilation for another OS, plus all the associated testing, is required)?

Or in other words:
Assuming that porting the code to GPUs makes sense at all (i.e. it can be implemented technically and scales well with parallelized throughput - which for some applications is certainly not a given), wouldn't a significant increase in computational throughput justify omitting the VirtualBox environment in general, or at least for Linux machines?
I mean, depending on the computational speed increase when using a GPU client, even the writing of VirtualBox snapshots might become unnecessary.

Michael.
Profile Ben Segal
Volunteer moderator
Project administrator

Joined: 1 Sep 04
Posts: 122
Credit: 2,579
RAC: 0
Message 30486 - Posted: 25 May 2017, 4:39:42 UTC - in response to Message 30484.  
Last modified: 25 May 2017, 4:40:57 UTC

In fact, the reason the LHC experiment and Theory applications don't use GPUs is porting difficulty. SixTrack is the exception: a GPU port is on the way, as is an ARM port, so let's not exclude SixTrack.

Developing a Linux-only solution isn't worthwhile in general, as Windows platforms still make up around 90% of the volunteer population. But ATLAS has tried out a native Linux app without VirtualBox. Otherwise, VirtualBox remains vital for LHC@home, whether we like it or not.

Sorry about that,

Ben
Profile Michael H.W. Weber

Joined: 18 Sep 04
Posts: 30
Credit: 5,071,350
RAC: 0
Message 30487 - Posted: 25 May 2017, 6:09:11 UTC - in response to Message 30486.  
Last modified: 25 May 2017, 6:37:25 UTC

Thanks a lot for the information Ben.

Porting SixTrack to ARM is a VERY good idea.
If you need volunteer testing, just let us know. Our team uses a lot of ARM devices; I personally have 4 of them running at the moment. Most are ODROIDs with different ARM CPU types (Cortex-A9 quad / A15 quad / big.LITTLE octa / A53 quad 64-bit), plus an NVIDIA Tegra K1 which I run for our DC organization (this one is CUDA-capable, and there is an Einstein@home GPU client for it which Christian from our team/Einstein@home developed).

Is there any plan for when the SixTrack client will become available for GPUs & ARM CPUs?

Michael.
Profile Michael H.W. Weber

Joined: 18 Sep 04
Posts: 30
Credit: 5,071,350
RAC: 0
Message 30488 - Posted: 25 May 2017, 8:34:17 UTC
Last modified: 25 May 2017, 8:37:28 UTC

In case the LHC@home team would like to write DC history once again:
Try to also port SixTrack to the OpenCL-capable (i.e. more recent) ARM Mali GPUs. To the best of my knowledge, so far not a single DC project supports these types of GPUs.

Michael.
Harri Liljeroos

Joined: 28 Sep 04
Posts: 423
Credit: 22,580,586
RAC: 5,644
Message 30490 - Posted: 25 May 2017, 11:17:19 UTC - in response to Message 30488.  

In case the LHC@home team would like to write DC history once again:
Try to also port SixTrack to the OpenCL-capable (i.e. more recent) ARM Mali GPUs. To the best of my knowledge, so far not a single DC project supports these types of GPUs.

Michael.


I don't know about the ARM Mali GPUs, but SETI and Einstein both have OpenCL applications for Nvidia and AMD GPUs. And yes, they are pretty fast, but SETI has an even faster CUDA application for Linux. The latest SETI applications were developed by the users, not by the project.


©2020 CERN