How can I verify the GPU is being utilized?
Joined: 11 Apr 17 Posts: 39 Credit: 7,735,161 RAC: 0
Is LHC code able to utilize the GPU(s)? How can I tell whether they are being utilized now or not? GPUtrip (or something like that) seemed to be working just fine. (This post was created on the computer with the strongest GPU I have at the moment.)
Joined: 15 Jun 08 Posts: 2413 Credit: 226,555,663 RAC: 131,251
Currently none of the LHC@home apps is doing scientific calculations on GPUs. See the app list here: https://lhcathome.cern.ch/lhcathome/apps.php Last October Riccardo posted a comment about GPU app development: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5173&postid=40258
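As for the original "how can I tell" question: even though no LHC@home app currently uses the GPU, you can check whether anything on the machine is, by querying the driver's monitoring tool. The sketch below assumes an NVIDIA card with `nvidia-smi` on the PATH; the function name `gpu_utilization` is my own, not part of any BOINC or LHC@home tooling.

```python
import shutil
import subprocess

def gpu_utilization():
    """Return per-GPU utilization percentages via nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None  # NVIDIA driver tools not installed on this host
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    # One line per GPU, each a bare integer percentage
    return [int(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    util = gpu_utilization()
    if util is None:
        print("nvidia-smi not found; cannot query GPU utilization")
    else:
        for i, u in enumerate(util):
            print(f"GPU {i}: {u}% busy")
```

If every GPU reports 0% while BOINC tasks are running, the tasks are CPU-only, which matches the app list above.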
Joined: 11 Apr 17 Posts: 39 Credit: 7,735,161 RAC: 0
Dang, that sure seems shortsighted. For all the math-heavy aspects of the LHC project, it at least bears looking into. It's pretty easy to move a virtual environment to a container design. IF their algorithm(s) are not good candidates for the array processing that GPUs are so outstandingly good at, then my point is moot. But I just betcha they are. Just think how much faster (sometimes better than 1000x) the sims and analyses could progress if the GPUs were utilized.
Joined: 2 May 07 Posts: 2101 Credit: 159,819,191 RAC: 123,837
Accuracy before speed. You can see this also in WCG.
Joined: 11 Apr 17 Posts: 39 Credit: 7,735,161 RAC: 0
As long as long doubles, or at least doubles, are used on the GPU, accuracy should still be equal to CPU runs, AIUI. We have the option (in the software) to truncate to 16 bits or even ints sometimes, as needed. Why would we use shorts or ints in scientific calcs?
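The accuracy concern is easy to demonstrate generically: in a long-running accumulation, single precision drifts visibly while double precision stays close to the exact answer. This is a standalone illustration of that point, not LHC code; `to_f32` and `naive_sum` are made-up names, and `to_f32` simulates IEEE 754 single precision by round-tripping each partial sum through a 32-bit float.

```python
import struct

def to_f32(x):
    """Round a Python double to the nearest IEEE 754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def naive_sum(n, step, single=False):
    """Add `step` to a running total n times, optionally rounding to float32 each step."""
    total = 0.0
    for _ in range(n):
        total += step
        if single:
            total = to_f32(total)  # emulate single-precision accumulation
    return total

if __name__ == "__main__":
    n = 1_000_000  # exact sum of 0.1 taken n times is 100000.0
    print("double:", naive_sum(n, 0.1))
    print("single:", naive_sum(n, 0.1, single=True))
```

The double-precision total lands within a tiny fraction of 100000, while the emulated single-precision total is off by well over a unit, because each addition near a large total rounds to the coarse float32 grid. This is why scientific codes (on CPU or GPU) default to doubles, and why shorts or ints only appear where the quantity is genuinely discrete.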
©2024 CERN