Message boards : LHC@home Science : LHCb and the GPU

Magic Quantum Mechanic

Joined: 24 Oct 04
Posts: 1114
Credit: 49,501,728
RAC: 4,157
Message 42819 - Posted: 9 Jun 2020, 19:26:49 UTC

https://openlab.cern/project/testbed-gpu-accelerated-applications

https://openlab.cern/allen-initiative-supported-cern-openlab-key-lhcb-trigger-upgrade

A new system will soon see the first stage of LHCb's data-filtering process at CERN run on GPUs.
Investigations have shown that some algorithms can be made to run more efficiently on GPUs than on standard, general-purpose CPU computing chips.
With data rates set to increase alongside upgrades to the Large Hadron Collider (LHC), it is important to make sure the systems in place for filtering and analyzing particle collision events are as efficient as possible.
ID: 42819
Jim1348

Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 42820 - Posted: 9 Jun 2020, 20:44:20 UTC - in response to Message 42819.  

Remarkable. Who would have thought that LHCb could be run on the GPU?
If they can figure out how to package them up for us, we will be well pleased.
ID: 42820
Sesson

Joined: 4 Apr 19
Posts: 31
Credit: 3,547,441
RAC: 14,302
Message 42821 - Posted: 10 Jun 2020, 4:36:25 UTC - in response to Message 42820.  

The application that LHCb will run on GPUs is the first-level trigger, HLT1, which reduces the data rate from 40 Tb/s to 1-2 Tb/s. I don't think you have a 40 Tb/s internet connection at home, so we certainly won't be participating in running this application.
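
For a sense of scale, here is a quick back-of-the-envelope comparison (the 40 Tb/s and 1 Tb/s figures are the ones above; the 100 Mb/s home connection is just an assumed example):

# Rough scale of the HLT1 reduction described above. The data-rate
# figures come from the post; the home-link speed is an assumption.
detector_tbps = 40.0          # raw LHCb detector output, Tb/s
hlt1_tbps = 1.0               # lower end of the 1-2 Tb/s HLT1 output
home_tbps = 100e6 / 1e12      # a hypothetical 100 Mb/s home connection

print(f"HLT1 reduction factor: ~{detector_tbps / hlt1_tbps:.0f}x")
print(f"Detector rate vs home link: ~{detector_tbps / home_tbps:,.0f}x")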

However, any application migrated from CPU to GPU will be a huge improvement, as CERN needs a huge amount of CPU computing power everywhere. The CPU time saved here could be used for something more complex.

It is very typical for HEP applications to suffer from low precision and from numerical overflow and underflow, since the numbers encountered in particle physics can be subatomically tiny or astronomically large. Double precision is certainly needed; they would even want quad precision if it were available and fast enough. GPUs are not well prepared for the tough challenge of high-precision computation, otherwise the people at CERN would want to embrace GPU computing even more than we do.
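
A quick way to see the kind of precision trouble being described, comparing single and double precision (a generic illustration, not CERN code):

import numpy as np

# A tiny term added to a large one: float32 drops it entirely,
# while float64 still resolves it. HEP codes hit this whenever
# quantities spanning many orders of magnitude meet in one sum.
big, small = 1.0e8, 1.0e-4
print(np.float32(big) + np.float32(small) - np.float32(big))  # 0.0
print(np.float64(big) + np.float64(small) - np.float64(big))  # ~1e-4

# Overflow: float32 tops out near 3.4e38, well below some HEP scales.
print(np.float32(1e30) * np.float32(1e10))  # inf (overflow warning)
print(np.float64(1e30) * np.float64(1e10))  # 1e+40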
ID: 42821
Magic Quantum Mechanic

Joined: 24 Oct 04
Posts: 1114
Credit: 49,501,728
RAC: 4,157
Message 42827 - Posted: 10 Jun 2020, 20:21:20 UTC

That is why Nvidia is involved with this. You know they love the Einstein project, since they get all those customers buying the expensive cards all the time.

And I'm sure they can set this up so it would work here, and we have a couple of LHC test sites to try it out.
But by then it will probably be built for the newest generation of Nvidia cards.

I remember getting the GeForce 660 Ti SC almost 8 years ago when it was new and cost $300, but by now it is nowhere close to being one of the fast cards with the most memory.
But I still have it running 24/7, so it is way past its warranty (113,900,154 credits so far).

I have several 650 Ti and 550 Ti cards that I stopped running there a couple of years ago so I could run only the CPU cores here and do testing for CERN (and the video cards run up the electric bill too much).
ID: 42827
tullio

Joined: 19 Feb 08
Posts: 708
Credit: 4,336,250
RAC: 0
Message 42828 - Posted: 11 Jun 2020, 10:40:40 UTC

I have a GTX 1060 with 3 GB of video RAM, and at Einstein@home they asked me not to run Gravitational Wave GPU tasks since it can give wrong results. But I have run many GW tasks with no errors, and they were validated by wingmen with 2 GB of video RAM. So I left Einstein@home after many years of running it.
Tullio
ID: 42828
Jim1348

Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 42829 - Posted: 11 Jun 2020, 11:56:04 UTC - in response to Message 42828.  

I have a GTX 1060 with 3 GB of video RAM, and at Einstein@home they asked me not to run Gravitational Wave GPU tasks since it can give wrong results.

That is strange. The 1060 is a good card. I have a couple of them and use them on a variety of projects. But on Einstein I use an RX 570, since it is a lot faster at their OpenCL app.
ID: 42829
Harri Liljeroos

Joined: 28 Sep 04
Posts: 674
Credit: 43,150,492
RAC: 15,942
Message 42830 - Posted: 11 Jun 2020, 12:39:26 UTC - in response to Message 42829.  

I have a GTX 1060 with 3 GB of video RAM, and at Einstein@home they asked me not to run Gravitational Wave GPU tasks since it can give wrong results.

That is strange. The 1060 is a good card. I have a couple of them and use them on a variety of projects. But on Einstein I use an RX 570, since it is a lot faster at their OpenCL app.

The reason is that some of the Gravitational Wave tasks require more than 3 GB of memory (but not all of them). The Gamma-ray pulsar tasks don't need that much memory, so they are fine to run even on cards with 2 GB.
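
If you want to check whether a card clears that bar before letting it pick up GW tasks, nvidia-smi can report the total memory; a small sketch (Nvidia cards only; the 3 GB threshold is the one mentioned above):

import subprocess

# Ask nvidia-smi for the total VRAM of each GPU, in MiB.
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=memory.total",
     "--format=csv,noheader,nounits"],
    text=True,
)
for i, line in enumerate(out.strip().splitlines()):
    mib = int(line)
    verdict = "should fit GW tasks" if mib > 3072 else "may be too small for some GW tasks"
    print(f"GPU {i}: {mib} MiB total - {verdict}")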
ID: 42830
Jim1348

Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 42831 - Posted: 11 Jun 2020, 12:49:58 UTC - in response to Message 42830.  

OK, that makes sense.
ID: 42831
tullio

Joined: 19 Feb 08
Posts: 708
Credit: 4,336,250
RAC: 0
Message 42832 - Posted: 11 Jun 2020, 13:35:06 UTC

On the same 1060 I am running GPUGRID tasks, with good results and no errors.
Tullio
ID: 42832
Jim1348

Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 42833 - Posted: 11 Jun 2020, 13:43:58 UTC - in response to Message 42832.  

GPUGrid is CUDA, a whole different ball game.
But that is what I like about the LHCb work (getting somewhat back on topic). It will work well with Nvidia, if it works at all.
ID: 42833
Magic Quantum Mechanic

Joined: 24 Oct 04
Posts: 1114
Credit: 49,501,728
RAC: 4,157
Message 42836 - Posted: 11 Jun 2020, 19:53:14 UTC - in response to Message 42833.  

GPUGrid is CUDA, a whole different ball game.
But that is what I like about the LHCb work (getting somewhat back on topic). It will work well with Nvidia, if it works at all.


Yes, that is why I mentioned Nvidia. All my cards over at Einstein have been Nvidia over the years, but in the last few years some of the members have started using AMD cards instead (which seem to cost less for the performance).

You could always see over there that members would try running Einstein GPU tasks, have problems, and then say the cards worked on another project. They are different projects that do different things, yet we still get members saying their cards work over at X but not here at Z.

And if you have been running GPUs long enough, you should be able to figure that out. But then, I don't try every project like they are video games; I have ONLY run GPUs at Einstein because of what the project is for (and all my CPUs are for CERN).

BTW Jim, are you going to try your NVIDIA GeForce 1660 Ti 4GB over at Einstein?
I just looked, and they have the GIGABYTE GeForce GTX 1660 Ti OC 6G for $289 where I get my parts.
(Funny thing is, it says free shipping from the UK all the way to over here.)

And I'm sure you know some of our members over there have over 1 billion credits and a house full of home-built PCs.
I passed the 408 million mark this week over there. Ever since I stopped running the 650 Ti OC cards, I have wanted to see how they would do in my newer 8-core, 28 GB RAM PCs, but I never got around to it, since I always have those machines busy running CERN test tasks, and many of them here too.
ID: 42836
Jim1348

Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 42837 - Posted: 11 Jun 2020, 20:11:17 UTC - in response to Message 42836.  
Last modified: 11 Jun 2020, 20:54:33 UTC

BTW Jim, are you going to try your NVIDIA GeForce 1660 Ti 4GB over at Einstein?
I just looked, and they have the GIGABYTE GeForce GTX 1660 Ti OC 6G for $289 where I get my parts.
(Funny thing is, it says free shipping from the UK all the way to over here.)

Probably not. I have tried the GTX 1060 and 1070 on Einstein (both GW and FGRP) and was generally unimpressed.
I use efficiency (output per watt-hour) as my yardstick, not just how fast a card is. My RX 570 puts them both to shame, as I recall.
Normally Nvidia does OK with OpenCL, but for some reason not on Einstein, as far as I have found. YMMV.
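
For what it's worth, the yardstick is just this (the numbers below are made-up placeholders, not my measurements):

# Efficiency as output per watt-hour: credit rate divided by power draw.
# Both cards and all numbers here are hypothetical examples.
def credits_per_watt_hour(credits_per_hour, watts):
    return credits_per_hour / watts

print(credits_per_watt_hour(30_000, 120))  # card A: 250 credits/Wh
print(credits_per_watt_hour(35_000, 180))  # card B: ~194 credits/Wh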

Here are my most recent results (Win7 64-bit) with the RX 570 on GW. They averaged a little over 30 minutes.
https://einsteinathome.org/host/12799653/tasks/0/0
Einstein is one of the few projects that does as well on Windows as Linux. I have an identical RX 570 on Ubuntu 18.04.4, and it does about the same there.

OK, since you asked, digging through my old notes: for the FGRP work units, compared to my GTX 750 Ti, one of the most efficient Nvidia cards made, the RX 570 is 76% more efficient. But that was in November 2018. The GW tasks are a different story, but they generally still show the same trend on the GPU, just needing more CPU support.

PPS - The GTX 1660 Ti is about the most efficient card I have on Folding and probably GPUGrid, except maybe for my RTX 2060. Use them there.
ID: 42837
maeax

Joined: 2 May 07
Posts: 2071
Credit: 156,089,092
RAC: 103,852
Message 42853 - Posted: 12 Jun 2020, 18:56:36 UTC - in response to Message 42828.  

I have a GTX 1060 with 3 GB of video RAM, and at Einstein@home they asked me not to run Gravitational Wave GPU tasks since it can give wrong results. But I have run many GW tasks with no errors, and they were validated by wingmen with 2 GB of video RAM. So I left Einstein@home after many years of running it.
Tullio

I have been running GW from Einstein since today with an AMD WX 7100. No problems so far.
LHCb and Nvidia: OpenCL would be good, so that it could also run on AMD GPUs.
ID: 42853
[VENETO] boboviz

Joined: 7 May 08
Posts: 190
Credit: 1,499,854
RAC: 200
Message 42875 - Posted: 14 Jun 2020, 13:40:12 UTC - in response to Message 42821.  

The application that LHCb will run on GPUs is the first-level trigger, HLT1, which reduces the data rate from 40 Tb/s to 1-2 Tb/s. I don't think you have a 40 Tb/s internet connection at home, so we certainly won't be participating in running this application.

LHCb no, but maybe SixTrack....
Point 5:
For optimal performance and hardware-utilisation, it is crucial to share the particle state in-place between the codes SixTrackLib (used for tracking single particles following collisions) and PyHEADTAIL (used for simulating macro-particle beam dynamics). This helps to avoid the memory and run-time costs of maintaining two copies on the GPU. The current implementation allows this by virtue of implicit context sharing, enabling seamless hand-off of control over the shared state between the two code-bases. After a first proof-of-concept implementation was created at an E4-NVIDIA hackathon in April 2019, the solution was refined and the API of the libraries was adapted to support this mode of operation.

Work carried out within this project made it possible for PyHEADTAIL to rely on SixTrackLib for high-performance tracking on GPUs, resulting in performance improvements of up to two-to-three orders of magnitude compared to state-of-the-art single-threaded CPU-based code. We also exposed SixTrackLib to new applications and use cases for particle tracking, which led to several improvements and bug fixes.
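
The in-place sharing idea is roughly the following pattern (a generic CuPy sketch; the function names are made up for illustration and are not the actual SixTrackLib or PyHEADTAIL API):

import cupy as cp

# One particle-state buffer lives on the GPU; both "codes" mutate it
# in place, so no per-library copy is kept and nothing round-trips
# through host memory between the two.
n_particles = 1_000_000
state = cp.random.standard_normal((n_particles, 6))  # x, px, y, py, z, dp

def tracking_step(s):              # stand-in for SixTrackLib-style tracking
    s[:, 0] += 0.01 * s[:, 1]      # drift: positions advance with momenta
    s[:, 2] += 0.01 * s[:, 3]

def collective_kick(s):            # stand-in for a PyHEADTAIL-style kick
    s[:, 1] -= 1e-6 * cp.mean(s[:, 0])   # kick driven by the beam centroid
    s[:, 3] -= 1e-6 * cp.mean(s[:, 2])

for turn in range(10):             # alternate the two codes on one buffer
    tracking_step(state)
    collective_kick(state)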
ID: 42875
