1) Message boards : Number crunching : LHC and GPU computations (Message 9307)
Posted 9 Aug 2005 by ripednail
Post:
<blockquote>I have known this site for quite a long time, but I haven't managed to go into depth on this problem. Let us know about any interesting facts, ripednail.</blockquote>

Quick update...

Sounds like this article explains a lot... I remember my linear algebra teacher talking about matrices and how computers generate images on screens... I guess GPUs are good at computing them, and furthermore it looks as if a lot of people are doing open source work ;)

It would make me feel good inside to know that my graphics card was being used at 100% when I was not around :)

http://gamma.cs.unc.edu/LU-GPU/lugpu05.pdf
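Out of curiosity I sketched what that paper is actually computing. This is just the textbook LU factorization (Doolittle, no pivoting) in plain CPU C++, my own illustration rather than anything from the paper; their contribution is mapping the inner row updates onto the graphics pipeline:

#include <cstdio>
#include <vector>

// Plain Doolittle LU decomposition, no pivoting, done in place: afterwards
// A holds L below the diagonal (unit diagonal implied) and U on and above
// it. The LU-GPU paper maps this kind of repeated row update onto GPU
// rasterization passes instead of CPU loops.
void lu_decompose(std::vector<std::vector<double>>& A) {
    const size_t n = A.size();
    for (size_t k = 0; k < n; ++k) {
        for (size_t i = k + 1; i < n; ++i) {
            A[i][k] /= A[k][k];               // multiplier, stored as L(i,k)
            for (size_t j = k + 1; j < n; ++j)
                A[i][j] -= A[i][k] * A[k][j]; // update the trailing row
        }
    }
}

int main() {
    std::vector<std::vector<double>> A = {{4, 3}, {6, 3}};
    lu_decompose(A);
    // Expect L = [[1, 0], [1.5, 1]] and U = [[4, 3], [0, -1.5]].
    for (const auto& row : A) {
        for (double v : row) std::printf("%8.3f ", v);
        std::printf("\n");
    }
    return 0;
}

The nice part is that each row update touches a whole block of the matrix independently, which is exactly the kind of data-parallel work a GPU is built for.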
2) Message boards : Number crunching : LHC and GPU computations (Message 9284)
Posted 9 Aug 2005 by ripednail
Post:
<blockquote>This is not a new topic at all, but a very interesting one. I think an interesting link would be this one: www.gpgpu.org
I never fully understood why this is such a big problem; otherwise this computing power would already be used by some projects, wouldn't it?
Since GPUs are in some respects more complex and faster than a CPU, this seems like wasted computing power. Maybe someone can explain whether it is possible or not. :)</blockquote>

Very nice link, I will look into that one, if only because I saw C++ development there ;)
3) Message boards : Number crunching : LHC and GPU computations (Message 9283)
Posted 9 Aug 2005 by ripednail
Post:
<blockquote>The problem with GPUs is that they only use 32-bit floating point numbers, so they are pretty unusable for any science code that uses double precision (like sixtrack). I am also pretty sure that they do not support the full rounding standard of the IEEE floating point specification, which would be a huge problem for sixtrack.

Another problem is that while the AGP port is extremely fast at transferring data to the graphics card, it is often very slow at getting the results back to main memory. Maybe PCI Express is better, but I do not know.

</blockquote>

Thank you for the prompt answer...

The nvidia cards are quite popular 'high powered' ones... it would be great if you could take a quick look at the sixtrack source to confirm that it really needs double precision...

Instead of sending results back to system memory, why not keep them in the graphics card's memory... I have the cheap 5900 with 128 MB (more than enough for LHC), and the newer cards come with 256 to 512 MB of memory... I guess it would be really cool if someone looked into it in a bit more detail...

Not really sure how open nvidia is about the low-level functions of their graphics cards :(

Keep in mind that my comments are those of a novice...
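Since I can't check the sixtrack source myself, here is a toy C++ illustration (entirely my own, nothing from sixtrack) of the precision point quoted above: accumulate a small step ten million times, roughly the way tracking a particle over millions of turns accumulates tiny kicks.

#include <cstdio>

int main() {
    float  f = 0.0f;  // 32-bit, what a GPU of this era gives you
    double d = 0.0;   // 64-bit, what sixtrack reportedly needs
    for (int i = 0; i < 10000000; ++i) {
        f += 0.1f;
        d += 0.1;
    }
    // The exact answer is 1000000. The float sum drifts visibly away
    // from it, while the double sum stays essentially exact.
    std::printf("float : %.2f\n", f);
    std::printf("double: %.2f\n", d);
    return 0;
}

If a long accumulation already goes wrong in a toy loop like this, single-precision-only hardware is understandably a show-stopper for beam tracking.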
4) Message boards : Number crunching : LHC and GPU computations (Message 9240)
Posted 9 Aug 2005 by ripednail
Post:
www.Slashdot.org posted this link to http://gamma.cs.unc.edu/GPUSORT/documentation.html

I think it would be really cool if my GPU were utilized for the LHC computations... I personally have an nvidia 5900, and I think a lot of people have high-powered GPUs too... wasted computing power ;)
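For anyone wondering how you sort on hardware that was designed to draw triangles: the usual trick, and as far as I can tell roughly what GPUSORT builds on, is a sorting network such as bitonic sort, where every pass is a fixed, branch-free pattern of independent compare-and-swap steps that can run as one rendering pass over a texture. A minimal CPU sketch of the network in C++, just my own illustration and not GPUSORT's code:

#include <algorithm>
#include <cstdio>

// Bitonic sorting network for arrays whose length is a power of two.
// Each (k, j) pass is a fixed pattern of independent compare-and-swaps,
// which is why the algorithm maps so well onto GPU fragment shaders.
void bitonic_sort(float* a, int n) {
    for (int k = 2; k <= n; k <<= 1)           // size of sorted sub-blocks
        for (int j = k >> 1; j > 0; j >>= 1)   // compare distance
            for (int i = 0; i < n; ++i) {
                int partner = i ^ j;
                if (partner > i) {
                    bool ascending = (i & k) == 0;
                    if ((a[i] > a[partner]) == ascending)
                        std::swap(a[i], a[partner]);
                }
            }
}

int main() {
    float a[8] = {5, 1, 4, 2, 8, 7, 3, 6};
    bitonic_sort(a, 8);
    for (float v : a) std::printf("%g ", v);   // prints 1 2 3 4 5 6 7 8
    std::printf("\n");
    return 0;
}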


