Message boards : Number crunching : LHC and GPU computations


ripednail

Send message
Joined: 2 Aug 05
Posts: 4
Credit: 10,516
RAC: 0
Message 9240 - Posted: 9 Aug 2005, 2:23:05 UTC

Slashdot (www.slashdot.org) posted this link: http://gamma.cs.unc.edu/GPUSORT/documentation.html

I think it would be really cool if my GPU were utilized for the LHC computations... I personally have an NVIDIA 5900, and I think a lot of people have high-powered GPUs too... wasted computing power ;)
Michael Karlinsky

Send message
Joined: 18 Sep 04
Posts: 163
Credit: 1,682,370
RAC: 0
Message 9244 - Posted: 9 Aug 2005, 8:39:51 UTC

This would be cool too...

http://www.ageia.com/
Team Linux Users Everywhere
marc samarra

Send message
Joined: 13 Jul 05
Posts: 1
Credit: 13,630
RAC: 0
Message 9245 - Posted: 9 Aug 2005, 9:01:10 UTC

Audio processors are also becoming VERY powerful.
(Imagine having the power of a second 3.4 GHz processor (more than 10,000 MIPS) built into your PC, dedicated just to audio. That's enough additional power to process more than 10 billion instructions per second. With X-Fi, that's exactly what you get!)

http://www.soundblaster.com/products/x-fi/technology/xfiaudio/
Santas little helper

Send message
Joined: 14 Jul 05
Posts: 36
Credit: 582,943
RAC: 0
Message 9246 - Posted: 9 Aug 2005, 9:23:56 UTC
Last modified: 9 Aug 2005, 9:24:23 UTC

This is not a new topic at all, but it is very interesting. I think an interesting link would be this one: www.gpgpu.org
I never fully understood why this is such a big problem; otherwise this computing power would already be used by some projects, wouldn't it?
Since GPUs are in part more complex and faster than a CPU, this seems like wasted computing power. Maybe someone can explain whether it is possible or not. :)
Vedran Brnjetic

Send message
Joined: 16 Jul 05
Posts: 24
Credit: 6,549
RAC: 0
Message 9250 - Posted: 9 Aug 2005, 10:38:17 UTC

I think it would be a lot of work to optimize apps to use many different GPUs, because there are so many architectures.
Particularly for this project, where there are significant differences in results between AMD and Intel processors, not to mention differences between individual models.
Chrulle

Send message
Joined: 27 Jul 04
Posts: 182
Credit: 1,880
RAC: 0
Message 9253 - Posted: 9 Aug 2005, 11:59:17 UTC
Last modified: 9 Aug 2005, 11:59:25 UTC

The problem with GPUs is that they only support 32-bit floating-point numbers, so they are pretty much unusable for any science code that requires double precision (like SixTrack). I am also fairly sure they do not implement the full IEEE floating-point rounding standard, which would be a huge problem for SixTrack.
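
To make this concrete, here is a toy C++ loop (not SixTrack code, just an illustration): iterating the same chaotic map in single and double precision shows how quickly the two trajectories stop agreeing, which is fatal when volunteer machines must reproduce results exactly.

    #include <cstdio>

    int main() {
        float  xf = 0.1f;   // single precision: all a 2005-era GPU offers
        double xd = 0.1;    // double precision: what SixTrack requires

        // The logistic map x -> 3.9*x*(1-x) amplifies rounding errors,
        // much like long-term particle tracking does.
        for (int i = 0; i < 100; ++i) {
            xf = 3.9f * xf * (1.0f - xf);
            xd = 3.9  * xd * (1.0  - xd);
        }
        printf("float : %.15f\n", xf);   // promoted to double by printf
        printf("double: %.15f\n", xd);
        return 0;
    }

After a few dozen iterations the two values no longer agree in any digit.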

Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting results back to main memory. Maybe PCI Express is better, but I do not know.
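
If someone wants to measure this, here is a hypothetical micro-benchmark sketch (it assumes GLUT just to get a GL context; the texture size is arbitrary, glFinish only approximates when the transfer really completes, and timings vary a lot by driver):

    #include <GL/glut.h>
    #include <cstdio>
    #include <ctime>
    #include <vector>

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutCreateWindow("transfer-test");   // only needed for a GL context

        const int N = 1024;                  // 1024x1024 RGBA = 4 MB
        std::vector<unsigned char> buf(N * N * 4, 0);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        clock_t t0 = clock();
        // Upload: CPU -> GPU (the fast direction on AGP)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, N, N, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &buf[0]);
        glFinish();
        clock_t t1 = clock();
        // Readback: GPU -> CPU (the slow direction on AGP)
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, &buf[0]);
        glFinish();
        clock_t t2 = clock();

        printf("upload:   %.3f s\n", double(t1 - t0) / CLOCKS_PER_SEC);
        printf("readback: %.3f s\n", double(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }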


Chrulle
Research Assistant & Ex-LHC@home developer
Niels Bohr Institute
ripednail

Send message
Joined: 2 Aug 05
Posts: 4
Credit: 10,516
RAC: 0
Message 9283 - Posted: 9 Aug 2005, 17:04:00 UTC - in response to Message 9253.  

> The problem with GPUs is that they only support 32-bit floating-point numbers, so they are pretty much unusable for any science code that requires double precision (like SixTrack). I am also fairly sure they do not implement the full IEEE floating-point rounding standard, which would be a huge problem for SixTrack.
>
> Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting results back to main memory. Maybe PCI Express is better, but I do not know.

Thank you for the prompt answer...

The NVIDIA cards are quite popular 'high-powered' ones... it would be great if you could take a quick look at the source to confirm...

Instead of sending results back to system memory, why not keep them in the graphics card's memory? I have the cheap 5900 with 128 MB (more than enough for LHC), and the newer cards come with 256 to 512 MB of memory... It would be really cool if someone looked into this in a bit more detail...

I'm not really sure how open NVIDIA is about the low-level functions of their graphics cards :(

Keep in mind that my comments are those of a novice...
ripednail

Send message
Joined: 2 Aug 05
Posts: 4
Credit: 10,516
RAC: 0
Message 9284 - Posted: 9 Aug 2005, 17:05:40 UTC - in response to Message 9246.  

> This is not a new topic at all, but it is very interesting. I think an interesting link would be this one: www.gpgpu.org
> I never fully understood why this is such a big problem; otherwise this computing power would already be used by some projects, wouldn't it?
> Since GPUs are in part more complex and faster than a CPU, this seems like wasted computing power. Maybe someone can explain whether it is possible or not. :)

Very nice link, I will look into that one, if only because I saw C++ development there ;)
Santas little helper

Send message
Joined: 14 Jul 05
Posts: 36
Credit: 582,943
RAC: 0
Message 9286 - Posted: 9 Aug 2005, 17:22:13 UTC

I've known this site for quite a long time, but I never managed to go into depth on this problem. Let us know if you find any interesting facts, ripednail.
Desti

Send message
Joined: 16 Jul 05
Posts: 84
Credit: 1,875,851
RAC: 0
Message 9303 - Posted: 9 Aug 2005, 22:13:41 UTC - in response to Message 9253.  

> Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting results back to main memory. Maybe PCI Express is better, but I do not know.

PCIe can transfer data in both directions at the same speed.
Linux Users Everywhere @ BOINC
ripednail

Send message
Joined: 2 Aug 05
Posts: 4
Credit: 10,516
RAC: 0
Message 9307 - Posted: 9 Aug 2005, 23:33:25 UTC - in response to Message 9286.  

> I've known this site for quite a long time, but I never managed to go into depth on this problem. Let us know if you find any interesting facts, ripednail.

Quick update...

It sounds like this article explains a lot... I remember my linear algebra teacher talking about matrices and how computers generate images on screens... I guess GPUs are good at computing them, and furthermore it looks as if a lot of people are doing open-source work ;)

It would make me feel good inside to know that my graphics card was being used 100% when I was not around :)

http://gamma.cs.unc.edu/LU-GPU/lugpu05.pdf
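
From skimming the paper, the core of what it maps onto the GPU is ordinary LU factorization of a matrix; as a rough idea, here is the same arithmetic as a plain CPU sketch in C++ (Doolittle, no pivoting, toy 4x4 matrix), where the paper's pixel shaders take over the updates in the inner loops:

    #include <cstdio>

    const int N = 4;

    int main() {
        // Toy symmetric positive-definite matrix, so no pivoting is needed.
        double A[N][N] = {{4, 3, 2, 1},
                          {3, 4, 3, 2},
                          {2, 3, 4, 3},
                          {1, 2, 3, 4}};

        // In-place Doolittle factorization: L fills the strict lower
        // triangle (unit diagonal implied), U the upper triangle.
        for (int k = 0; k < N; ++k) {
            for (int i = k + 1; i < N; ++i) {
                A[i][k] /= A[k][k];                 // multiplier l_ik
                for (int j = k + 1; j < N; ++j)
                    A[i][j] -= A[i][k] * A[k][j];   // trailing update
            }
        }

        for (int i = 0; i < N; ++i) {
            for (int j = 0; j < N; ++j) printf("%8.4f ", A[i][j]);
            printf("\n");
        }
        return 0;
    }

On the GPU, each step of the outer loop becomes a rendering pass over a texture holding the matrix, which is why graphics cards are fast at this.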
Santas little helper

Send message
Joined: 14 Jul 05
Posts: 36
Credit: 582,943
RAC: 0
Message 9318 - Posted: 10 Aug 2005, 9:46:19 UTC - in response to Message 9307.  

> ... it looks as if a lot of people are doing open-source work ;)
>
> It would make me feel good inside to know that my graphics card was being used 100% when I was not around :)
>
> http://gamma.cs.unc.edu/LU-GPU/lugpu05.pdf

Open source - yeah 8-) ...

Nice link, I'll try to read it and hope to understand a bit of it.

And as for keeping the graphics card busy: nothing to add!


