Message boards :
Number crunching :
LHC and GPU computations
Author | Message |
---|---|
Joined: 2 Aug 05 Posts: 4 Credit: 10,516 RAC: 0 |
www.Slashdot.org posted this link: http://gamma.cs.unc.edu/GPUSORT/documentation.html I think it would be really cool if my GPU was utilized for the LHC computations... I personally have an Nvidia 5900, and I think a lot of people have high-powered GPUs too... wasted computing power ;) |
Joined: 13 Jul 05 Posts: 1 Credit: 13,630 RAC: 0 |
Audio processors are also becoming VERY powerful. (Imagine having the power of a second 3.4 GHz processor (more than 10,000 MIPS) built into your PC, dedicated just to audio! That's enough additional power to process more than 10 billion instructions per second! With X-Fi that's exactly what you get!) http://www.soundblaster.com/products/x-fi/technology/xfiaudio/ |
Joined: 14 Jul 05 Posts: 36 Credit: 582,943 RAC: 0 |
This is not a new topic at all, but very interesting. I think an interesting link would be this one: www.gpgpu.org I never fully understood why this is such a big problem. Otherwise this computing power would already be used by some projects, wouldn't it? Given that GPUs are in some respects more complex and faster than a CPU, this really is wasted computing power. Maybe someone can explain whether it is possible or not. :) |
Joined: 16 Jul 05 Posts: 24 Credit: 6,549 RAC: 0 |
I think it would be a lot of work to optimize apps to use many different GPUs because there are so many architectures. Particularly for this project, where there are significant differences in results between AMD and Intel processors, not to mention differences between each model. <img border="0" src="http://boinc.mundayweb.com/one/stats.php?userID=2328&trans=off" /> <img border="0" src="http://boinc.mundayweb.com/one/stats.php?userID=2328&prj=5&trans=off" /> |
Joined: 27 Jul 04 Posts: 182 Credit: 1,880 RAC: 0 |
The problem with GPUs is that they only use 32-bit floating point numbers, so they are pretty unusable for any science code that needs double precision (like sixtrack). I am also pretty sure that they do not support the full rounding modes of the IEEE floating point standard, which would be a huge problem for sixtrack. Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting the results back to main memory. Maybe PCI Express is better, but I do not know. Chrulle Research Assistant & Ex-LHC@home developer Niels Bohr Institute |
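[Editor's note: the single-vs-double precision point above is easy to see in a toy experiment. The snippet below is only an illustration of accumulated rounding error in long iterative computations, not sixtrack's actual tracking map; the rotation angle and step count are arbitrary choices.]

```python
import numpy as np

# Repeatedly apply a 2-D rotation (a toy stand-in for iterating a
# particle's coordinates for many turns) in single and in double
# precision, then compare how far the two results have drifted apart.
theta = 0.1
c, s = np.cos(theta), np.sin(theta)

def track(dtype, steps=100_000):
    rot = np.array([[c, -s], [s, c]], dtype=dtype)
    x = np.array([1.0, 0.0], dtype=dtype)
    for _ in range(steps):
        x = rot @ x  # rounding error accumulates every step
    return x

x32 = track(np.float32)
x64 = track(np.float64)
# After 100,000 steps the float32 trajectory has visibly diverged.
print(np.abs(x64 - x32.astype(np.float64)).max())
```

In double precision the drift stays near machine epsilon per step; in single precision it grows large enough to matter for any long-term stability study, which is why 32-bit-only hardware was a showstopper.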
Joined: 2 Aug 05 Posts: 4 Credit: 10,516 RAC: 0 |
<blockquote>The problem with GPUs is that they only use 32-bit floating point numbers, so they are pretty unusable for any science code that needs double precision (like sixtrack). I am also pretty sure that they do not support the full rounding modes of the IEEE floating point standard, which would be a huge problem for sixtrack. Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting the results back to main memory. Maybe PCI Express is better, but I do not know. </blockquote> Thank you for the prompt answer... The Nvidia is quite a popular 'high powered' card... if you could take a quick look at the source to confirm... Instead of sending results back to system memory, why not keep them in the graphics card's memory? I have the cheap 5900 with 128 MB (more than enough for LHC); the newer cards come with 256 to 512 MB of memory... I guess it would be really cool if someone looked into it in a bit more detail... Not really sure how open source Nvidia gets with the low-level functions of their graphics cards :( Keep in mind that my comments are those of a novice... |
Joined: 2 Aug 05 Posts: 4 Credit: 10,516 RAC: 0 |
<blockquote>This is not a new topic at all, but very interesting. I think an interesting link would be this one: www.gpgpu.org I never fully understood why this is such a big problem. Otherwise this computing power would already be used by some projects, wouldn't it? Given that GPUs are in some respects more complex and faster than a CPU, this really is wasted computing power. Maybe someone can explain whether it is possible or not. :)</blockquote> Very nice link, I will look into that one, if only because I saw C++ development ;) |
Joined: 14 Jul 05 Posts: 36 Credit: 582,943 RAC: 0 |
I have known this site for quite a long time but never managed to go into depth on this problem. Let us know about any interesting facts, ripednail |
Joined: 16 Jul 05 Posts: 84 Credit: 1,875,851 RAC: 0 |
<blockquote> Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting the results back to main memory. Maybe PCI Express is better, but I do not know. </blockquote> PCIe is able to transfer data in both directions at the same speed. Linux Users Everywhere @ BOINC http://lhcathome.cern.ch/team_display.php?teamid=717 |
Joined: 2 Aug 05 Posts: 4 Credit: 10,516 RAC: 0 |
<blockquote>I have known this site for quite a long time but never managed to go into depth on this problem. Let us know about any interesting facts, ripednail</blockquote> Quick update... it sounds like this article explains a lot... I remember my linear algebra teacher talking about matrices and how computers generate images on screens... I guess GPUs are good at computing them, and furthermore it looks as if a lot of people are doing open source work ;) It would make me feel good inside to know that my graphics card was being used 100% when I was not around :) http://gamma.cs.unc.edu/LU-GPU/lugpu05.pdf |
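[Editor's note: the linked LU-GPU paper is about running LU factorization, i.e. Gaussian elimination, on graphics hardware. For readers unfamiliar with it, here is a plain CPU sketch of the same algorithm in NumPy; the function name and the test matrix are illustrative choices, not code from the paper.]

```python
import numpy as np

def lu_decompose(A):
    """LU factorization with partial pivoting, so that A[perm] == L @ U.
    The rank-1 update inside the loop is the data-parallel step that
    the LU-GPU paper maps onto the graphics pipeline, one pass per column."""
    A = A.astype(np.float64).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        # Pick the largest entry in column k as pivot (partial pivoting).
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            perm[[k, p]] = perm[[p, k]]
        # Eliminate below the pivot: every cell of the trailing submatrix
        # is updated independently, which is what suits a GPU.
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return perm, L, U

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
perm, L, U = lu_decompose(A)
print(np.abs(L @ U - A[perm]).max())  # residual near machine epsilon
```

Because each column update touches the whole trailing submatrix independently, it maps naturally onto hardware built to update every pixel of a framebuffer in parallel.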
Joined: 14 Jul 05 Posts: 36 Credit: 582,943 RAC: 0 |
<blockquote>... it looks as if a lot of people are doing open source work ;) It would make me feel good inside to know that my graphics card was being used 100% when I was not around :) http://gamma.cs.unc.edu/LU-GPU/lugpu05.pdf</blockquote> Open source - yeah 8-) ... Nice link, I'll try to read it and hope to understand a bit. And about the heavily busy graphics card: nothing to add! |
©2024 CERN