What is LHC@home?

This is a research project that uses Internet-connected computers to advance particle and accelerator physics. Participate by downloading and running a free program on your computer. By default, you run the classic LHC@home application, SixTrack, which simulates accelerator physics and helps researchers at CERN improve the LHC.

Other LHC@home applications, which use virtualization, run Theory and experiment simulations for ATLAS, CMS and LHCb.

Please note that some of the applications on LHC@home require VirtualBox to be installed. Please visit the LHC@home information site for more information. If you have any problems or questions, please visit the Message Boards, Questions and Answers, and FAQ.

Join LHC@home

User of the Day

Profile: Laguna
I started to work for distributed computing in 1997 with the SETI@Home project. Afterwards I took part in a lot of projects. In 2003 I joined the...


Many queued tasks - server status page erratic
Due to the very high number of queued Sixtrack tasks, we have enabled 4 load-balanced scheduler/feeder servers to handle the demand. (Our bottleneck is the database, but several schedulers can cache more tasks to be dispatched.)

Our server status page does not currently show the daemon status on remote servers in real time. Hence the page may indicate that a server process such as the feeder on boincai12 is down when it is in fact running.

Please also be patient if you are not getting tasks for your preferred application quickly enough; after a few retries, tasks should arrive. Thanks for your understanding and happy crunching!

---the team
21 Aug 2019, 11:41:17 UTC · Discuss

CMS@Home disruption, Monday 22nd July
I've had the following notice from CERN/CMS IT:

>> following the hypervisor reboot campaign, as announced by CERN IT here: https://cern.service-now.com/service-portal/view-outage.do?n=OTG0051185
>> the following VMs - under the CMS Production openstack project - will be rebooted on Monday July 22 (starting at 8:30am CERN time):
>> | vocms0267 | cern-geneva-b | cms-home

to which I replied:
> Thanks, Alan. vocms0267 runs the CMS@Home campaign. Should I warn the volunteers of the disruption, or will it be mainly transparent?

and received this reply:
Running jobs will fail because they won't be able to connect to the schedd condor_shadow process. So this will be the visible impact on the users. There will be also a short time window (until I get the agent restarted) where there will be no jobs pending in the condor pool.
So it might be worth it giving the users a heads up.

So, my recommendation is that you set "No New Tasks" for CMS@Home sometime Sunday afternoon, to let tasks complete before the 08:30 (CERN time) restart. I'll let you know as soon as Alan informs me that vocms0267 is up and running again.
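For volunteers running a headless client, "No New Tasks" can also be toggled with boinccmd. A minimal sketch, assuming the client runs locally and the usual LHC@home project URL (verify your own attachment URL with `boinccmd --get_project_status`):

```shell
# Stop fetching new work from LHC@home before the Monday restart
# (project URL is an assumption; check yours with `boinccmd --get_project_status`)
boinccmd --project https://lhcathome.cern.ch/lhcathome/ nomorework

# Once the service is back, allow new tasks again
boinccmd --project https://lhcathome.cern.ch/lhcathome/ allowmorework
```

Note this affects all LHC@home applications attached under that project URL, not only CMS@Home; volunteers who only want to pause CMS can instead deselect it in their project preferences.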
17 Jul 2019, 13:14:12 UTC · Discuss

Native ATLAS and Theory applications require a CVMFS configuration update
Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration. Please see the following post for the details.
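As an illustration only (the authoritative settings are in the linked post), updating a local CVMFS configuration typically means editing /etc/cvmfs/default.local and reloading the client. A hedged sketch:

```shell
# Edit the local CVMFS configuration; the exact keys and values
# to change are given in the linked post, not shown here
sudo nano /etc/cvmfs/default.local

# Apply the new configuration without unmounting the repositories
sudo cvmfs_config reload

# Verify that the configured repositories still mount correctly
cvmfs_config probe
```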
5 Jul 2019, 8:06:20 UTC · Discuss

Server downtime
Our BOINC servers were unavailable from 13:45 to 15:30 CEST this afternoon due to a problem with a shared storage cluster. This explains possible download/upload errors from your clients.

Sorry for the trouble and happy crunching.
26 Jun 2019, 13:53:13 UTC · Discuss

Killing extremely long SixTrack tasks
Dear all,

we had to kill ~10k WUs named:


due to a mismatch between the requested disk space and that actually needed by the job.
These tasks would in any case eventually be killed by the BOINC client with an EXIT_DISK_LIMIT_EXCEEDED message - please see:
for further info.

These tasks cover 10^7 LHC turns, a factor of 10 more than usual, with output files growing in size until the disk limit is hit.

The killing does not involve all tasks with such names - I have killed only those covering the stable part of the beam, since these tasks are expected to run for a long time and hence reach the disk usage limit. The other WUs should see enough beam losses that the limit is not reached - please post in this thread if this is not the case. This selective killing was done to preserve, as much as possible, tasks already being crunched or pending validation.

As soon as you update the LHC@home project in your BOINC manager, you should see the tasks being killed.
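On a headless client, the project update mentioned above can be triggered from the command line. A sketch, assuming the usual LHC@home project URL (verify yours with `boinccmd --get_project_status`):

```shell
# Ask the client to contact the LHC@home scheduler immediately;
# this also delivers any server-side task aborts to your client
boinccmd --project https://lhcathome.cern.ch/lhcathome/ update
```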

We will soon resubmit the same tasks with appropriate disk requirements.
Apologies for the disturbance, and thanks for your understanding.
18 Jun 2019, 16:49:08 UTC · Discuss

... more

News is available as an RSS feed.

©2019 CERN