1) Message boards : Number crunching : LHC+@Home (Message 22527)
Posted 7 Sep 2010 by Skysnake
Post:
Google helps to find the URL ;)

Very nice to see that the project is moving forward.
2) Message boards : Cafe LHC : Whatever happened to LHC@home....apologies and thanks. (Message 22502)
Posted 29 Aug 2010 by Skysnake
Post:
OMG, finally it's coming to a good end.

Now only the Duke has to come back and the world would be a better place :D
3) Message boards : Cafe LHC : Whatever happened to LHC@home....apologies and thanks. (Message 22435)
Posted 2 Aug 2010 by Skysnake
Post:
Have you tried for CERN as a student/associate/fellow or whatever?


We have lots of professors who work on LHC projects, and therefore also many Diplom theses (Diplomarbeiten) on LHC topics.
4) Message boards : Cafe LHC : Whatever happened to LHC@home....apologies and thanks. (Message 22419)
Posted 11 Jul 2010 by Skysnake
Post:
Been a long time, too long. It is all beginning to happen, I think, and a plan is finally being put in place.

Can't say much yet, but planning to upgrade both server and API and "everything" else.


Nice to hear. If you need help with coding or anything, just say so. I'm a physics student in Heidelberg and have to write my Diplom thesis in 1 or 2 years, so perhaps we can work together ;)

And if not for that, I could perhaps use the work for a practical training I have to do in computer science (hardware and software).
5) Message boards : Cafe LHC : Whatever happened to LHC@home....apologies and thanks. (Message 22346)
Posted 22 May 2010 by Skysnake
Post:


Even better news is that CERN will now tolerate our work, and even host the servers, and we are hoping to have server-side support closer to home. More on this if it comes to pass.

In the meantime, thanks for your patience and continued support.

I would really like to get compiler independence and then Mac and CUDA executables... dream on, as the first priority must be to get an up-to-date, stable service.


Nice to hear. *thumbs up*

Perhaps much more support in the future?
6) Message boards : Number crunching : Two announcements (Message 22291)
Posted 30 Apr 2010 by Skysnake
Post:

It would be interesting to bring LHC data to fast analysis, but it is understandable that the information is huge and it might be a tremendous job to split it up and send it throughout the world.
But surely, with the right people supporting your efforts to get this online, you might find a longer-term solution: reduce the work sent out by preparing the bulk data into partially calculated versions that only need further calculation, with just the outcome validated by 3-way redundancy, for example.


Last week I had a nice conversation with a fellow student. He has a Hiwi job (student research assistant) at an institute and works there with LHCb data. He told me that they have a nice cluster there :)

About the files, he said they are really huge: 4 TB! When I asked if he REALLY works with files that big, he answered that he tried at the beginning, but it was far too slow. So now he splits the files into 500 MB parts and lets the cluster crunch them. He said a big problem was that the 4 TB files are very awkward to handle and use a special file format that causes some trouble.
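
Not his actual code, just a minimal sketch of that kind of chunking in C++, assuming a plain binary file and the 500 MB part size he mentioned (the real LHCb format is more involved, as he says; file names are made up):

// split_file.cpp -- hypothetical sketch: split one huge binary file into
// fixed-size parts so each part can be crunched as an independent cluster job.
// Assumes a plain byte stream; a real experiment format would need a
// format-aware splitter so events are not cut in half.
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

int main(int argc, char* argv[]) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <bigfile>\n", argv[0]); return 1; }

    const std::size_t chunkBytes = 500ull * 1024 * 1024;  // ~500 MB parts, as described
    std::ifstream in(argv[1], std::ios::binary);
    if (!in) { std::fprintf(stderr, "cannot open %s\n", argv[1]); return 1; }

    std::vector<char> buf(chunkBytes);  // buffer one whole part; fine for a sketch
    for (int part = 0; in; ++part) {
        in.read(buf.data(), static_cast<std::streamsize>(buf.size()));
        const std::streamsize got = in.gcount();
        if (got <= 0) break;  // nothing left to write

        std::ofstream out(std::string(argv[1]) + ".part" + std::to_string(part),
                          std::ios::binary);
        out.write(buf.data(), got);
    }
    return 0;
}

Each .part file can then be submitted as its own job, which is presumably what his cluster setup does.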

He also said that they will get a new cluster; what kind he didn't know, but from somebody else I heard that the institute will get a GPU cluster.

So I hope that LHC@home will get a new chance to help.
7) Message boards : Number crunching : Two announcements (Message 22028)
Posted 9 Mar 2010 by Skysnake
Post:
I'm not saying that Fortran is a bad language. I would say it is fine for CPUs, because it is so old and so highly optimised, like I said, but it has no GPU support, and GPUs should be great for tracking.

Perhaps in my Diplom thesis I can do something with GPUs in this direction. At the moment my problem is a lack of time for things like this on the one hand, and on the other, the lack of data and source code for other platforms that would give me some basics to start from.
8) Message boards : Number crunching : Two announcements (Message 22020)
Posted 9 Mar 2010 by Skysnake
Post:

Yes, they should; however, no one has coded LHC applications for GPUs. The emergence of GPUs is extremely recent in a community where Fortran is still used and where construction of the machine started 17 years ago.


OK, but isn't this conservative and unprogressive? I mean, CERN is THE place for science. They have the chance there to find new physics and have to find new ways and theories to explain it. And on the other hand they are so small-minded and only see Fortran, because they have always worked with Fortran. OK, OK, Fortran performs very well, because you have lots of highly optimised libraries etc. But is it impossible to port them to C++ or so?

I had a very interesting discussion with some staff at the university, and they said: yes, some people, especially the older ones, just work with Fortran and that's it. They are not open to new things. I think that's a big mistake.

Look at Einstein@home. Perhaps they have found the shortest time length (which is much larger than expected). Without the help of BOINC this chance for really new science wouldn't exist, because without BOINC they don't have the resources for it.

CPU time especially is always limited; no matter how much you have, you can always find new things to solve, or just crunch at a higher resolution. Why does CERN just say no to free CPU time? OK, it would be a lot of work to get performance on a home PC "equal" to a server in a server farm, but you don't have to pay for it! So who really cares about performance? "Ohhh, bad boy, your program on BOINC is 10 times slower than on my cool server farm..." And? Who cares? You don't have to pay for electricity etc. Every single result you get is more or less for free! How much does CERN spend every year on hardware and maintenance? 20 million euros or more? Even a saving of just 0.1% would already be 20,000 euros a year.

There must be so much work that could be done on a normal PC very easily. Just finding the tracks of the particles through the detector should be a significant part of the work, no? And you can process every single event totally independently of the other events (I can't imagine anything that would prevent parallel processing). Please correct me if I am wrong.
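
A toy sketch of what I mean (my own illustration, not real reconstruction code: Event, fitTracks(), and all numbers are made up). Because each event only reads its own data, events can be spread over threads, or by the same argument over BOINC hosts, with no communication between them:

// events_parallel.cpp -- hypothetical sketch: per-event track finding is
// embarrassingly parallel, so events can be split across threads (or across
// volunteer machines). Event and fitTracks() are placeholders.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

struct Event  { int id; /* hits, detector readout, ... */ };
struct Tracks { int nTracks; };

// Placeholder for the expensive part: fit particle tracks for ONE event.
// It touches only its own event, so events never need to talk to each other.
Tracks fitTracks(const Event& ev) { return Tracks{ev.id % 7}; }

int main() {
    std::vector<Event> events(10000);
    for (std::size_t i = 0; i < events.size(); ++i) events[i].id = static_cast<int>(i);

    std::vector<Tracks> results(events.size());
    const unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t) {
        pool.emplace_back([&, t] {
            // Thread t handles events t, t + nThreads, t + 2*nThreads, ...
            for (std::size_t i = t; i < events.size(); i += nThreads)
                results[i] = fitTracks(events[i]);
        });
    }
    for (auto& th : pool) th.join();

    std::printf("processed %zu events on %u threads\n", events.size(), nThreads);
    return 0;
}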

Summing up, I want to say that when I read what you write here and think about it, I must say it is a shame that you get so little support from CERN.

PS: Yes, I know Bigmac, and I think he does a great job here.
9) Message boards : Number crunching : Two announcements (Message 22018)
Posted 8 Mar 2010 by Skysnake
Post:
Isn't it stupid as hell to ignore a platform like BOINC? I mean, just look at Einstein@home or similar projects: they have 200 TFLOPS WITHOUT GPU support!

I'm a little bit angry when I hear this. If CERN saved money on the server farm, they could spend it on other things, like more staff support or something like that. And I think there are great opportunities with GPUs for the future.

My own next project is OpenCL. The only problem for me at the moment is that I have Diplom exams this year and perhaps next year -.- So time is very limited.
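
For anyone curious, this is roughly the "hello world" I plan to start from: a minimal vector addition against the standard OpenCL C API. Error checking is stripped for brevity, so treat it as a sketch rather than production code:

// vec_add_cl.cpp -- minimal OpenCL example: add two vectors on whatever
// device is available. Build (typical): g++ vec_add_cl.cpp -lOpenCL
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc =
    "__kernel void vadd(__global const float* a, __global const float* b,"
    "                   __global float* c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", nullptr);

    // Copy the inputs to the device, leave room for the output.
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a.data(), nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b.data(), nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, nullptr);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    // One work-item per vector element; blocking read waits for the kernel.
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

    std::printf("c[0] = %f (expected 3.0)\n", c[0]);
    return 0;
}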

But I see the problem too often that people don't see the opportunities in BOINC etc. They are too stuck in the old fashion of big server farms etc... I can't understand why they so often dismiss this. But perhaps after the semester break I can talk a little bit with my professor about the servers for the LHC. Perhaps he was the one with the PS3 ;) Lindenstruth is his name. Do you know him?
10) Message boards : Number crunching : Two announcements (Message 21996)
Posted 8 Mar 2010 by Skysnake
Post:

I am unclear about your question as well.

The LHC@home project uses a program called SixTrack. This models the beams within the LHC machine. The beam studies we are doing now are related to the upgrade of this machine in 2 years.

The project currently has no plan to run data coming from the 4(6) experiments that use the LHC machine. This is for two reasons:

1) The code is not portable; it is unwieldy and only runs on Scientific Linux CERN edition and some related platforms.

OK, that's a reason. I know it is a lot of work to write a program that has good performance. But you have lots of people who can write programs. Also, once you have a good algorithm, you can normally port it to another platform fairly easily. I study in Heidelberg, and if I remember right, we have a server farm there. And those servers are nearly completely normal PCs.

2) Each dataset is in the 100s of gigabytes range; transferring this and the results back and forth between us and you volunteers is just not feasible. Theoretically you guys could store an individual dataset, however we would have to store them all, which is 15 PB of data each year.

Um... do you really think that's right? If I remember right, the MAXIMUM data collection rate of an experiment is about 1 GB/s. Yes, I know much more comes out of the detector, but that stream is filtered by hardware trigger chips. Some of them were developed in Heidelberg.

Yes, you need about 15 PB to store ALL data per year. But not for only one experiment, and certainly not for one event. I think something on the order of a few MB per event is realistic. At a few MB per event, a work unit would only need to carry a handful of events, not a whole dataset. OK, let's say you also need a lot of detector parameters etc., but even then that is just a few GB, and these parameters are nearly constant.


Sorry, but to me it sounds a bit like an excuse. I study in Heidelberg, see the BW-Grid servers etc., and know which hardware they have and which they don't.

These servers always have a very fast network (Myrinet or similar) and often lots of RAM. But that is simply an optimal setup for solving the problems.

I study physics, and so I think that, as in the past, a lot of computing power is needed today to compute the flight paths of the particles. But precisely for such problems GPUs should be VERY powerful.

OK, let's stop here. All in all, I can't understand why you ignore the community. Just think how much money you could save by not having to buy a lot of servers or pay for electricity/cooling.

If CERN isn't really interested in our help, then just say so, but don't say that our hardware is too bad for the work!


