21) Message boards : LHC@home Science : GPU Computing (Message 30487)
Posted 25 May 2017 by Profile Michael H.W. Weber
Post:
Thanks a lot for the information Ben.

Porting SixTrack to ARM is a VERY good idea.
If you need volunteer testers, just let us know. Our team uses a lot of ARM devices; I currently have four of them running myself. Most are ODROIDs with different ARM CPU types (Cortex-A9 quad / -A15 quad / big.LITTLE octa / -A53 quad 64-bit), plus an NVIDIA Tegra K1 which I run for our DC organization (that one is CUDA-capable, and there is an Einstein@home GPU client which Christian from our team/Einstein@home developed).

Is there any plan as to when the SixTrack client will become available for GPUs & ARM CPUs?

Michael.
22) Message boards : LHC@home Science : GPU Computing (Message 30484)
Posted 24 May 2017 by Profile Michael H.W. Weber
Post:
Ok, let's exclude SixTrack if you like (it does not require VirtualBox):
For what reason(s) is VirtualBox essential to ATLAS, Theory, CMS and LHCb - especially when Linux clients are recruited, where no cross-compilation to another OS (plus all the associated testing) is required?

Or, in other words:
Assuming that porting the code to GPUs makes sense at all (i.e. it can be implemented technically and scales well when parallelized - which for some applications is certainly not a given), wouldn't a significant increase in computational throughput justify omitting the VirtualBox environment entirely, or at least for Linux machines?
I mean, depending on the speed increase a GPU client brings, even the writing of VirtualBox snapshots might become unnecessary.

Michael.
23) Message boards : LHC@home Science : GPU Computing (Message 30481)
Posted 24 May 2017 by Profile Michael H.W. Weber
Post:
The discussion above is from mid 2013...

In the meantime, many additional DC projects have successfully released powerful GPU clients, and with ATLAS, Theory, CMS & LHCb in addition to the classic LHC SixTrack software, CERN now appears to have plenty of additional tasks ready for computation.

So, again: is it time for GPU clients at CERN, or are there good reasons why GPU computing is still not utilized?

Michael.
24) Message boards : Sixtrack Application : 260.000 WUs to send, but no handed out (Message 30278)
Posted 10 May 2017 by Profile Michael H.W. Weber
Post:
Meanwhile, please feel free to churn our other applications.

Those other applications either require VirtualBox or a constant Internet connection, so many of us can't support them in general - and especially not before the LHC competition starts. (For your information: to do well in the contest, teams acquire huge loads of tasks many days before the race actually starts. They "bunker" them, i.e. compute the tasks offline without uploading the results, which is done only once the race's start date arrives. See the problem with the other apps/tasks now?)
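For readers unfamiliar with bunkering, the workflow described above can be sketched with the boinccmd CLI that ships with the BOINC client. This is only an illustrative sketch: the project URL is an example value, and the script just reports a dry run when boinccmd is not installed.

```shell
# Sketch of the "bunkering" workflow: fetch tasks early, cut the network,
# compute offline, and release all results only when the race starts.
# PROJECT_URL is an example; boinccmd is the BOINC client's control CLI.
PROJECT_URL="https://lhcathome.cern.ch/lhcathome/"

if command -v boinccmd >/dev/null 2>&1; then
    # Phase 1 (days before the race): contact the scheduler and fetch
    # work while network activity is still allowed.
    boinccmd --project "$PROJECT_URL" update
    # Phase 2: forbid all network activity so finished results stay local.
    boinccmd --set_network_mode never
    STATUS="bunkering"
    # Phase 3 (race start): re-enable the network to upload everything:
    #   boinccmd --set_network_mode auto
else
    STATUS="dry-run"   # boinccmd not installed; nothing was changed
fi
echo "$STATUS"
```

The same switch is available in the BOINC Manager GUI as "Network activity suspended"; boinccmd also accepts an optional duration argument for --set_network_mode.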

Michael.
25) Message boards : Sixtrack Application : 260.000 WUs to send, but no handed out (Message 30272)
Posted 10 May 2017 by Profile Michael H.W. Weber
Post:
It really is a shame: LHC takes part in this year's BOINC Pentathlon but does not deliver Sixtrack tasks even in the minimum required amounts.

Now you had a nice opportunity to acquire some compute power actually worth mentioning, and what are you doing? Possibly having a coffee?
Well, at least send one over to us, too!

Michael.
26) Message boards : LHC@home Science : Details on the results returned by project participants (Message 28452)
Posted 12 Jan 2017 by Profile Michael H.W. Weber
Post:
I was not just asking about information on the projects as a whole. My question rather relates to details on each of the jobs and whether these can be more elaborately visualized. Even the MCPLOTS data, at least to me, is not really giving much information.

Example of what I mean in a different context:
Say I contribute to Folding@home where proteins (chains of interconnected amino acids) are being folded on the basis of first principles. It *would* be possible to load each of my task results into a viewer program and then watch a short movie about how that particular protein I was working on folded during my individual simulation run (they do molecular dynamics simulations).

So, is there something similar you can think of for these more abstract particle physics projects? I believe this would give the project a big boost.

Michael
27) Message boards : LHC@home Science : Details on the results returned by project participants (Message 28421)
Posted 10 Jan 2017 by Profile Michael H.W. Weber
Post:
I was wondering whether it would somehow be possible to get more detailed information on what is being done with the results we return to the project.

I mean, LHC@home harbors a number of sub-projects. When I run a simulation from any of the sub-projects, to what scientific question EXACTLY does that simulation correspond? Which physical equation is being evaluated? Which (virtual) particles are generated during the simulation run? Couldn't these simulations be classified into distinct groups, and the outcome of each be visualized individually?

Particle physics, at least to me, appears as quite an abstract topic and I believe that making these things a bit more accessible could help increase participation in this project.

Michael.
28) Message boards : Theory Application : vLHC app MC PLOTS results on LHC classic server? (Message 28420)
Posted 10 Jan 2017 by Profile Michael H.W. Weber
Post:
Thanks, Ben. ;-)

Michael.
29) Message boards : Theory Application : vLHC app MC PLOTS results on LHC classic server? (Message 28360)
Posted 6 Jan 2017 by Profile Michael H.W. Weber
Post:
I have a question: Now that the vLHC project has kind of migrated to the classic LHC@home website: Where do I find the MC PLOTS stats for all the CMS, Theory, etc. tasks of the former vLHC system which I have now computed on the LHC@home system?

Michael.
30) Message boards : LHC@home Science : Invitation to contribute an article (Message 14017)
Posted 17 Jun 2006 by Profile Michael H.W. Weber
Post:




©2022 CERN