Message boards : Number crunching : Thinking about our new admins

River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15540 - Posted: 18 Nov 2006, 19:51:30 UTC

We, the participants, are victims of our own outstanding success. We have proved to the LHC engineering community, and to the European particle physics community generally, that we can provide an enormously valuable resource. This project was originally budgeted to be closed once the original design runs were complete - that would have been around Xmas 2005. We are still open because the engineers keep finding work for us to do, and because we have convinced everyone that the project is worth supporting.

At present, this project has caretaker admins who work for CERN, but who do not have time to do much more than firefighting. The job will be transferred "soon" to a UK team based at Queen Mary, University of London (QMUL, also seen abbreviated as QMC or QMW due to various name changes in the past; nothing to do with QMC@home, a totally distinct BOINC project).

Even when the new admins are in post, they will take a while to learn their way around the server, and then a while longer to sort out the problems before moving on to the wish lists. Somewhere in all of that they will also introduce a new series of work, called Garfield, at which point the typical amount of work we get will go up again.

There is said to be no truth in the rumour that Garfield is the reason we are called lhcathome.

We heard recently that the incoming admins had met the outgoing / caretaker admins at CERN, Geneva.

Next job, if I remember right, is to move the servers to the UK; sometime after that the incoming admins get their hands on the machines and start learning the job. So it is happening, and it will all come in due course.

We should not, in my opinion, complain about the current low level of support - it is better than the project having been retired. Also, as I have said on several other threads, if we present a lot of grumpy demands to the incoming admins on their first official day in the job, they are only human and are not likely to respond as well as if we are welcoming and friendly to them.

We should of course feel free to make suggestions, here and on other threads, but with the understanding that there will be nobody to implement them for quite a while. Once the new admins are properly in post, I am sure they will be as keen to make improvements as we are to suggest them.

River~~
ID: 15540
PovAddict

Joined: 14 Jul 05
Posts: 275
Credit: 49,291
RAC: 0
Message 15545 - Posted: 18 Nov 2006, 20:18:35 UTC

I agree completely with you. I want to be helpful to the project, and since I run my own project I know how to do some things (like enabling the file_deleter or exporting stats), so I can help some more :) And, well, the new admins could always ask on the BOINC mailing lists (specifically boinc_projects), where there are many people with that kind of experience - other project administrators and developers.

ID: 15545
PovAddict

Joined: 14 Jul 05
Posts: 275
Credit: 49,291
RAC: 0
Message 15546 - Posted: 18 Nov 2006, 20:32:07 UTC
Last modified: 18 Nov 2006, 20:34:22 UTC

As the problems, questions and complaints are spread across the whole forum, I thought I would summarize them here.

New hosts are being created on each scheduler contact (see forum threads here and here). This makes the database grow and adds server load when querying the host table, and users are annoyed to find so many hosts in their lists. Apparently clients think they have host ID 0, so the server assigns a new ID on every contact; but the new ID never reaches the client correctly, so it keeps sending 0. The workaround is to look up the highest host ID you have on the website and set it manually in client_state.xml. However, not all users will do that, either because they don't read the forums, don't know how to do it, or don't want to fiddle with files.
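
For anyone wanting to try the workaround: the host ID lives in the <project> section of client_state.xml in your BOINC directory. A sketch of the relevant fragment, assuming the usual layout (123456 stands in for your own highest host ID from the website; only edit with BOINC stopped):

    <project>
        <master_url>http://lhcathome.cern.ch/</master_url>
        ...
        <hostid>123456</hostid>
        ...
    </project>

After restarting BOINC, the next scheduler contact should report against that existing host instead of creating yet another new one.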

Another "problem": lack of XML stats. This seems to be a matter of adding the stats export program to the task list on server, but there is a problem: stats websites will show two million hosts on LHC, due to the previous problem. Literally two million. That's more than twice what SETI has! /stats/host.gz would be extremely big, and would take time to create - more server load. And stats websites downloading it - network load. The hosts problem above has to be solved first. Maybe there is some other reason why they aren't exporting stats.

Last issue: the file_deleter is not running. This doesn't affect users at all, but the scary number of results waiting to have their files deleted (over 200,000) makes me think the server could run out of disk earlier than expected. Maybe LHC is using an alternative (I'd guess manual) way to delete files that doesn't involve the standard file_deleter. In that case, whatever program they use to delete files should mark them as deleted in the database, to make the number actually go down. Or take the waiting-to-delete count off the server status page so people don't get freaked out :)
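
If they do want the standard daemon, enabling it should just be a matter of adding an entry to the daemons section of config.xml and restarting the project; a sketch (the -d flag only sets log verbosity):

    <daemon>
        <cmd>file_deleter -d 2</cmd>
    </daemon>

As I understand it, the file_deleter walks through results whose files are no longer needed, removes the files from the upload/download directories, and marks them deleted in the database - which is exactly what would make that scary counter go down.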

As you can see, there are plenty of assumptions, guesses and conclusions of mine in here. It would be great to have the admins report how they are progressing (or why they aren't). Informative project = happy crunchers.
ID: 15546
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15554 - Posted: 18 Nov 2006, 22:18:29 UTC

By the way, PovAddict is too modest to mention his project by name, so I will do it for him: it is Renderfarm@Home. Even tho the application side is obviously very different, most of the server-side stuff will be the same (except for the Chrulle tweaks, of course).

R~~
ID: 15554
FalconFly

Joined: 2 Sep 04
Posts: 121
Credit: 592,214
RAC: 0
Message 15605 - Posted: 20 Nov 2006, 23:22:28 UTC - in response to Message 15554.  

All those changes spark one big question in my mind:

Will this still be the same Project that we joined a long time ago?
Same Philosophy, same goals and policies?

Scientific Network : 45000 MHz - 77824 MB - 1970 GB
ID: 15605
River~~

Joined: 13 Jul 05
Posts: 456
Credit: 75,142
RAC: 0
Message 15607 - Posted: 21 Nov 2006, 3:41:47 UTC - in response to Message 15605.  
Last modified: 21 Nov 2006, 3:43:35 UTC

All those changes spark one big question in my mind:

Will this still be the same Project that we joined a long time ago?
Same Philosophy, same goals and policies?


Continuity and Change.

The goal has been diversified to find us more work. The logistics behind the project are changing too.

The original goal of LHC@home was to provide computing power for the design of the LHC itself - the machine that accelerates the particles. This goal has been largely fulfilled, and only small amounts of work remain to be done, in small batches. There will likely continue to be occasional need for this work, using the Sixtrack application, when magnets get replaced or when experimental physicists ask the beam physicists to make the beam do something unusual.

If we stuck to that goal, and that goal alone, the project would continue for many years, but crunching at an even lower rate than at present. A thousand or so hosts would be needed to keep the turnaround under a week when work did need to be run.

Some participants would be happy to remain on a nearly-retired project. Having almost finished our original goal, we'd move on to support other goals by moving to other projects. This would keep the LHC@home name in the stats specific to the design process of the accelerator, and the stats would become largely static, relating to completed work and growing only occasionally as the odd new batch of work was needed.

More (I think) would prefer a steadier stream of work. Having almost finished our original goal, we move on within the same project to adopt related or similar work, so that the LHC@home name continues to belong to a living project with growing stats. This means the stats continue to grow, but become progressively less representative of the original design work as time goes on, with relatively small amounts of occasional work still being done towards accelerator design.

It is this second option that has been adopted, both at the request of participants here and due to the high value the European particle physics community now puts on this project as a computing resource.

The new work will be using an app called Garfield. How well have the project admins done in choosing a goal to diversify into?

Particle physics is about making particles interact at high energies and then working out what happened. Both Sixtrack (the existing app) and Garfield are about design of equipment for use in particle physics.

- Sixtrack is an app for beam physicists (those who design, commission, and run the accelerators that prepare the particles before they collide)

- whereas Garfield is an app for detector physicists (those who design and commission the detection equipment that detects the particles produced after the collision)

- both model fields

- in Sixtrack it is the effect of external magnetic fields on the particles themselves that is modelled, together with the interactions between the particles' own electric fields. By understanding these effects better, higher energies or brighter beams (ie more particles per bunch) can be achieved, making more reactions possible

- whereas in Garfield what is modelled is the effect of electric fields on the electrons released when charged particles produced in the reaction traverse a gas. These detectors are known as "drift chambers" because a crucial factor is the way the electrons "drift" after release (see the rough numbers after this list). By understanding this better, the same detector can be used to position the particle more accurately, or a cheaper detector used to achieve the same accuracy.

- Sixtrack is specific to the LHC as it is strongly customised to the layout of the accelerator

- whereas Garfield (if I understand correctly) is flexible and can be used to design detection equipment across various different designs of drift chamber. The drift chambers it models could be intended for use at any particle physics facility, not just the LHC. No doubt, at least at first, Garfield will be used with LHC experiments in mind, but this is a political, not a technical, restriction.

- The design decisions made with Sixtrack either affect all the experiments at the LHC (if they change the beam properties) or affect none (if they deliver the same beam properties more economically).

- whereas only some experiments at the LHC will want to use drift chambers (other kinds of detector suit other experiments better). This means that Garfield will be totally irrelevant to some of the experiments.

- the beam physicists using the Sixtrack results work for CERN itself as they are building/running the facility

- whereas the detector physicists using the Garfield results work for the experimental consortia that use the five(?) interaction areas provided by the LHC. These consortia are formed by international collaborations of universities.
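
(As promised above, a rough sense of scale for the drift chamber point - my own illustrative numbers, nothing from the project: in a typical chamber gas the freed electrons drift towards the sense wires at something like 5 cm per µs, ie 50 µm per ns. So if the electronics can time the arriving charge to about a nanosecond, the drift distance - and hence the track position - is known to roughly 50 µm. The better Garfield predicts the drift velocity throughout the chamber, the closer a real detector gets to that sort of resolution.)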

And finally, getting away from the software: up to July of this year the project admin, Chrulle, was based at CERN and employed by a department within CERN.

Whereas, from January, the project admin will be someone based at QMUL and employed by GridPP - a UK organisation that coordinates an already existing grid of computers amongst UK particle physicists.

The UK is one of the founders and supporters of CERN, and lhcathome will still be available to particle physicists internationally, the admin support now forming part of the UK's practical contribution to CERN.

There are significant continuities and significant changes, both in the technical application of the project and in the logistics that keep it going. These changes and continuities are bound to be reflected in the philosophies, goals, and policies of the project.

But the idea that we could go back to the way things were at the peak of LHC production is an illusion; we have worked ourselves out of that job by our own success. In my opinion, the GridPP handover and the Garfield application offer something about as close to the original LHC@home philosophies, goals, and policies as we were ever likely to get, and the CERN handover team have done well to identify as close a match as they have.

It will certainly be closer to the original LHC@home than any other distributed computing project available in January 2007. Not all the changes will be negative - my hope is that an organisation whose primary mission is distributed computing will bring its own benefits to the admin role.

R~~
ID: 15607
Ben Segal
Volunteer moderator
Project administrator

Joined: 1 Sep 04
Posts: 139
Credit: 2,579
RAC: 0
Message 15620 - Posted: 22 Nov 2006, 11:20:54 UTC

Just to say that I greatly appreciate the thought and helpful comments from such experienced people as you, who are contributing to this thread (and others!). I will try to make sure that the new admins take your postings into account, particularly the buglist from PovAddict in Message 15546.

Thanks again, River, FalconFly and others!

Ben Segal / LHC@home
ID: 15620
