Message boards : Number crunching : The Great LHC Hold-Up

Jayargh

Joined: 24 Oct 04
Posts: 79
Credit: 257,762
RAC: 0
Message 8196 - Posted: 28 Jun 2005, 23:46:58 UTC
Last modified: 28 Jun 2005, 23:57:45 UTC

I believe we have been waiting these 2 weeks for new work when sooooo much was promised because of the "0" problem on different platforms.... but maybe I was wrong.... 'Twould be a nice gesture from the physicists to explain to us laymen what happened, even in a few short words. As a general partner in this collaboration, a bit of info is always appreciated :) [Edit] And oh, thank-you much Markku for the update on the when.
ID: 8196
Matt3223

Joined: 1 Oct 04
Posts: 1
Credit: 2,941
RAC: 0
Message 8279 - Posted: 2 Jul 2005, 16:41:29 UTC
Last modified: 2 Jul 2005, 16:43:22 UTC

I found this an interesting read on what has been going on at CERN concerning LHC@home.

It explains what was going on, and where things are going......

It's kinda long, so just scroll through the sections that aren't of interest, but it is readable.

Christian Søttrup and Jakob Pedersen worked furiously all spring and summer to get SixTrack and BOINC to function together. You can read their thesis, which describes the opportunities for combining public resource computing, such as LHC@home, with Grid computing like the LHC Computing Grid.


Here's the link to the paper:

http://www.fatbat.dk/thesis/

Click on the Thesis PDF download link to read it.
ID: 8279
Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 8289 - Posted: 2 Jul 2005, 21:02:16 UTC

Thanks, added it to the Wiki :)

It is in the section of the site on LHC@Home; that listing, though, is under the Beta Projects, as they have not yet admitted they are doing production ... at least that is what the first page always tells me ...
ID: 8289
Digitalis

Joined: 2 Sep 04
Posts: 19
Credit: 26,799
RAC: 0
Message 8309 - Posted: 5 Jul 2005, 12:09:25 UTC - in response to Message 8279.  
Last modified: 5 Jul 2005, 12:17:51 UTC

> Here's the link to the paper:
>
> http://www.fatbat.dk/thesis/
>
> Click on the Thesis PDF download link to read it.
>
Yes, lots of interesting reading there. This caught my eye. No need for optimised apps for LHC, I think.

"The LHC simulation that was chosen to be the basis for LHC@home was SixTrack. SixTrack
has been developed over the last 20 years by Frank Schmidt, and is used by CERN and other
synchrotrons3 around the world to do beam studies. Like much of CERN's older software it
is written in Fortran. On the initiative of Eric McIntosh it is also used as part of the SPEC4
floating point benchmark. So CERN, when buying new computers, never have to estimate
how well a new processor will run SixTrack, but can look it up directly. Another added benefit
is that manufacturers will optimize their hardware and compilers to get the best benchmark
score, so modern CPUs and compilers are directly optimized for SixTrack performance."
Get BOINC WIKIed

ID: 8309
Digitalis

Joined: 2 Sep 04
Posts: 19
Credit: 26,799
RAC: 0
Message 8311 - Posted: 5 Jul 2005, 12:16:17 UTC - in response to Message 8309.  
Last modified: 5 Jul 2005, 12:17:01 UTC

Double post
Get BOINC WIKIed

ID: 8311
The Gas Giant

Joined: 2 Sep 04
Posts: 309
Credit: 715,258
RAC: 0
Message 8318 - Posted: 5 Jul 2005, 23:41:16 UTC
Last modified: 5 Jul 2005, 23:43:37 UTC

More interesting snippets.....

"At the time we started this analysis, an average WU would take about one hour to compute on a 2 GHz P4 machine. This number is also highly influenced by the options the physicist specify. During our work with the application we had to change Fortran compilers many times, sometimes just versions and sometimes also the compiler manufacturer, and each time the execution time for a WU became smaller. The version we finally used for the public had almost halved the time an average WU takes."

"LHC@home delivered more than 600,000 finished jobs to SixTrack during our public run. For a comparison CPSS has delivered 200,000 finished jobs during its lifetime, which is more than a year. This means that LHC@home has delivered more than three times the amount of results in a quarter of the time as CPSS has. Furthermore the complexity of the jobs run on LHC@home has been much higher."

"At the end of the project, we experienced a lot of performance issues with the database server, so the LHC@home hardware will also be upgraded during this period. The newest types of server machines are not put directly into production use at CERN, but are tested for a period of time to measure the stability. Luckily, the LHC@home project has been chosen to be a part of this test, so we can exchange our deprecated servers with some of the newest and fastest, and we hope this will increase the amount of users the system is able to handle."

"The positive result of the project which was initially an experiment led by a small sub group of the IT division at CERN has impressed many people at CERN. So the decision has been taken to make BOINC a more permanent facility at CERN. BOINC will probably be implemented as one of the services the IT division offers. This means that an experiment can approach the IT division and have their simulations ported and run through the IT divisions BOINC servers."

"For a comparison LCG-2 consists of approximately 10,000 CPUs, whereas LHC@home could deliver the equivalent of 360 dedicated CPUs and it was only so small because our server capacity had limited the amount of users to 6000. We believe that it is possible to extend the size of the user base to at least the 500,000 users that SETI@home has."

These were some of the positive points from my skimming of the thesis.... there were some not so positive points (but who wants to hear about those....) ;)

Live long and crunch.

Paul
(S@H1 8888)
BOINC/SAH BETA
ID: 8318
Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 8319 - Posted: 6 Jul 2005, 3:26:51 UTC - in response to Message 8318.  
Last modified: 6 Jul 2005, 3:27:36 UTC

> These were some of the positive points from my skimming of the thesis.... there
> were some not so positive points (but who wants to hear about those....) ;)

Me ...

I think that knowing both the good and the bad is important. The difficulty we find ourselves in so often is that it is so easy to let a minor bit of froth taint the whole soup ...

That is why I tried to write that eloquent little number on SETI@Home as we start the monthly "BOINC is not as good as Classic because ..." thread.
ID: 8319
ric

Joined: 17 Sep 04
Posts: 190
Credit: 649,637
RAC: 0
Message 8322 - Posted: 6 Jul 2005, 7:30:50 UTC - in response to Message 8319.  

...
> ...start the monthly "BOINC is not as good as Classic because ..." thread.
>

Hm, really?

I see some difficulties in doing LHC work with the "SETI@home Classic client" ;)

BOINC is basically not as bad as "some" people want to make us believe.

There is more to it than just good or bad...


It would be interesting to turn it around and list why "BOINC" is "better" overall.

ID: 8322
Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 8325 - Posted: 6 Jul 2005, 12:19:35 UTC

I was just saying that it seems every couple of weeks, roughly once a month, we get a thread yelling that SETI@Home Classic is perfectly fine and that they hate BOINC because ...

So, we try to find out why they think BOINC is so bad, and it USUALLY boils down to the fact that it is different and does not appear to be as "easy" to use.

The fact that the Classic application was a bear to run unless you had one or two "helper" programs is ignored. The fact that you had almost no information about the system and how it was performing is ignored. The perception that BOINC is less reliable, which usually just comes from us now having greater visibility into what is going on, is used to "prove" Classic is better.

Mostly it is because they don't see any reason for change. And most people don't like change that much.
ID: 8325
