1) Message boards : Number crunching : LHC on GPU ? (Message 26974)
Posted 9 Nov 2014 by TomTom
Post:
Talking about VMs is off topic in this thread (LHC on GPU?). Choose or create another thread for the VM topic.
2) Message boards : Number crunching : LHC on GPU ? (Message 26972)
Posted 9 Nov 2014 by TomTom
Post:
That's one of the biggest drawbacks of VMs: they require a huge amount of RAM because two OSes are running at once!!!
It is too heavy for common PC/Mac users who have "only" 4 GB or 8 GB of RAM installed (as you said, tullio).
Furthermore, the Android platform (smartphones, tablets, ...) can't run a VM.
My advice is to stop thinking about VMs for this project and to focus on GPU porting (unless you are working for a virtualization company).
With the app_config.xml BOINC feature, you can run several tasks on the same GPU.
Personally, all my GPUs run 2 tasks at the same time for all the projects I participate in. And I have several GPUs across my PCs, which represent a large and efficient potential resource that LHC@home can't use.

My quote is: "I'm tired of computing 10h WUs while only 1h would be enough."
3) Message boards : Number crunching : Host messing up tons of results (Message 26962)
Posted 8 Nov 2014 by TomTom
Post:
Hi jelle.
Same problem here with the same wingman. But don't worry, normally the WU will be sent later to another wingman in order to confirm the correct result.
Maybe the server managers use a threshold of invalid results to ban a user?
4) Message boards : Number crunching : LHC on GPU ? (Message 26959)
Posted 8 Nov 2014 by TomTom
Post:
IMHO, going virtual is closing doors.
1) A virtual machine is like a black-box system where it is hard to evolve, improve algorithms, and use all the computing resources of the host machine, since VirtualBox itself becomes the limitation,
2) Using a virtual machine adds a software layer between the host hardware and the computing application, which is less efficient than running natively. VirtualBox does not support type 1 (bare-metal) hypervisor operation,
3) OK, the virtual machine is the same for everybody, which is simpler to manage for the LHC@home developers. But challenge that assumption!!
4) You mention seti@home: let's compare results. For example, computing this WU on GPU is 11 times faster than on CPU.

So if you go virtual, a WU currently computed in 10 h on CPU will be computed in, say, 12-15 h (TBD) on a virtual machine. Maybe it is not as simple as that, but think about the roughly 55 min (10 h / 11) it would take on a GPU!!!
5) Message boards : Number crunching : LHC on GPU ? (Message 26954)
Posted 6 Nov 2014 by TomTom
Post:
If your GPU's average utilization while crunching is below 70%, you can run several tasks on one GPU. Just add a file named app_config.xml to the BOINC data directory, under projects\<project>, with the following content (gpu_usage 0.5 means each task uses half a GPU, i.e. 2 tasks per GPU; cpu_usage is the number of CPU cores reserved per GPU task). Example for seti@home, running 2 tasks per GPU:

<app_config>
  <app>
    <name>astropulse_v6</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

For einstein@home, the app names are: einsteinbinary_BRP4, einsteinbinary_BRP4G, einsteinbinary_BRP5 and hsgamma_FGRP4.
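
For illustration, here is a minimal app_config.xml sketch for einstein@home along the same lines. The 0.5/0.08 values are just the illustrative settings reused from the seti@home example above, not tuned recommendations, so adjust them to your own GPU and CPU load:

<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP5</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRP4</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.08</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

After saving the file into the einstein@home project directory, tell the client to re-read its config files from the BOINC Manager (or restart the client) for the change to take effect.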
6) Message boards : Number crunching : LHC on GPU ? (Message 26951)
Posted 6 Nov 2014 by TomTom
Post:
Is the code available for porting to GPU?
What should the policy be:
- Specialized but efficient (CUDA for NVIDIA GPUs only)
- Standard but less efficient (OpenCL for ATI, Intel and NVIDIA GPUs)


