<?xml version="1.0" encoding="UTF-8" ?>
        <rss version="2.0">
        <channel>
        <title>LHC@home: News</title>
        <description>LHC@home: News</description>
        <link>https://lhcathome.cern.ch/lhcathome/</link>
        <copyright>CERN</copyright>
        <lastBuildDate>Tue, 10 Mar 2026 15:13:35 GMT</lastBuildDate>
        <language>en-us</language>
        <image>
            <url>https://lhcathome.cern.ch/lhcathome/rss_image.gif</url>
            <title>LHC@home</title>
            <link>https://lhcathome.cern.ch/lhcathome/</link>
        </image>
    <item>
        <title><![CDATA[Downtime for database upgrade]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6463</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6463</guid>
        <description><![CDATA[
There will be some server downtime Wednesday morning from 9 UTC due to a database upgrade.
]]></description>
        <pubDate>Tue, 10 Mar 2026 15:13:35 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server Upgrade - Monday 26th January 2026]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6437</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6437</guid>
        <description><![CDATA[
On Monday at 13:30 CET, the server will be upgraded to the latest release. This will involve downtime since the database will need to be upgraded.
]]></description>
        <pubDate>Fri, 23 Jan 2026 14:25:47 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[New application Xtrack]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6387</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6387</guid>
        <description><![CDATA[
As mentioned in <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6378&postid=52142" rel="nofollow">this thread</a>, we finally have the new Xtrack BOINC application available for beta testing. <br />
<br />
Please refer to the Sixtrack/Xtrack forum for further details and feedback discussion.  Tasks should hopefully be available by tomorrow.<br />
<br />
<i>The LHC@home team</i>
]]></description>
        <pubDate>Tue, 16 Sep 2025 16:11:37 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Maintenance downtime]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6309</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6309</guid>
        <description><![CDATA[
Our BOINC services will be unavailable for a while this morning between 8 and 9 AM CET for a database upgrade.
]]></description>
        <pubDate>Tue, 11 Mar 2025 06:16:13 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Season's greetings]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6266</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6266</guid>
        <description><![CDATA[
Merry Christmas and a peaceful end of the year from the LHC@home team!<br />
<br />
Many thanks to all our volunteers for your crunching and contributions!<br />
<br />
Here are some <a href="https://home.web.cern.ch/news/news/cern/cern-highlights-2024-celebrating-70-years" rel="nofollow">highlights from 2024 at CERN.</a><br />
<br />
Greetings and <a href="https://videos.cern.ch/record/2301141" rel="nofollow">best wishes for 2025</a>!
]]></description>
        <pubDate>Fri, 20 Dec 2024 12:18:57 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Downtime 1 October]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6220</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6220</guid>
        <description><![CDATA[
LHC@home BOINC servers will be down for a while later today due to a database intervention.
]]></description>
        <pubDate>Tue, 01 Oct 2024 08:02:24 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[20th BOINC Workshop May 29-31]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6144</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6144</guid>
        <description><![CDATA[
The registration is now open for the 2024 BOINC Workshop -- this year in person, at CERN.<br />
<br />
https://indico.cern.ch/event/1379525/overview<br />
<br />
Please register if you plan to attend.
]]></description>
        <pubDate>Mon, 22 Apr 2024 09:21:39 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[BOINC Needs Votes at an Upcoming UN Forum]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6129</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6129</guid>
        <description><![CDATA[
BOINC is a finalist for a notable award, and needs your vote (*by Sunday): <br />
<br />
Context: The <a href="https://www.itu.int/net4/wsis/forum/2024" rel="nofollow">World Summit on the Information Society (WSIS)</a> is a United Nations-sponsored initiative aimed at harnessing the potential of information and communication technologies to build inclusive and equitable information societies worldwide.  BOINC has been nominated for <a href="https://www.itu.int/net4/wsis/stocktaking/Prizes/2024" rel="nofollow">a prize at the 2024 forum</a>, and has passed initial hurdles; the next and last step ("Phase 3") requires public votes.  The award would be a very nice boost and validation for BOINC and all our projects; if we can get our communities to vote, we should have a decent shot at this point...<br />
<br />
Voting is pretty simple, takes just a few minutes; instructions are <a href="https://docs.google.com/document/d/1x9Xi3tq7Y9dlDD0Xb0Ul0yYCXct5pDqIqssqARxvrXg/edit?usp=sharing" rel="nofollow">here</a>.<br />
<br />
<i>    (*The deadline for votes is Sunday: 31 March 2024, 23:00 UTC+02:00)</i>
]]></description>
        <pubDate>Mon, 25 Mar 2024 14:59:22 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Season's greetings]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6082</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6082</guid>
        <description><![CDATA[
Many thanks to all our volunteers for your contributions to LHC@home over the last year!<br />
<br />
Some <a href="https://videos.cern.ch/record/2299435" rel="nofollow">highlights from CERN during 2023 can be seen in this video</a>. <br />
<br />
The LHC@home team wishes you a Merry Christmas, restful holidays and all the best for 2024!
]]></description>
        <pubDate>Fri, 22 Dec 2023 07:18:09 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Theory application reaches 6 TRILLION events !!]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6048</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6048</guid>
        <description><![CDATA[
Bravo to all Theory crunchers !!!
]]></description>
        <pubDate>Fri, 29 Sep 2023 09:38:49 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Downtime Monday 18th]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6040</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6040</guid>
        <description><![CDATA[
The LHC@home BOINC servers will be degraded on Monday 18th of September due to a database upgrade.  BOINC clients are likely to generate errors when trying to download or upload tomorrow morning.<br />
<br />
Thanks for your contributions and happy crunching!
]]></description>
        <pubDate>Sun, 17 Sep 2023 16:06:00 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server upgrade]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5956</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5956</guid>
        <description><![CDATA[
The LHC@home BOINC servers have been upgraded to the latest server release, 1.4.2.
]]></description>
        <pubDate>Tue, 24 Jan 2023 07:45:31 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Season's greetings]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5942</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5942</guid>
        <description><![CDATA[
Many thanks to all our volunteers for your contributions to LHC@home over the last year!<br />
<br />
We in the LHC@home team wish you a Merry Christmas and restful holidays.
]]></description>
        <pubDate>Wed, 21 Dec 2022 09:58:39 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Best wishes for 2022]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5779</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5779</guid>
        <description><![CDATA[
The LHC@home team wishes you all a Happy New Year and all the best for 2022!<br />
<br />
The simulations carried out under LHC@home contribute to improvements to the LHC accelerator as well as the experiments. Run 3 of the LHC will start soon.<br />
<br />
Meanwhile you can take a look at <a href="https://youtu.be/R11VyvT8gzY" rel="nofollow">this video with highlights</a> from CERN during 2021.<br />
<br />
Many thanks for your contributions and happy crunching!
]]></description>
        <pubDate>Wed, 05 Jan 2022 12:29:14 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Lack of CMS tasks due to a problem in WMAgent development]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5765</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5765</guid>
        <description><![CDATA[
Unfortunately, I have been unable to submit new workflows to the CMS project since yesterday, and the job queues have now drained.<br />
The cause is a change introduced in the development of the CMS work-flow management system.  These changes are tested first on a development system before being moved to the production system.  We currently use the development system to run CMS@Home, so the change is impacting us.<br />
I'm trying to find out when a fix will be forthcoming; until then, please set No New Tasks for CMS or switch to another project.<br />
I'm sorry about this.  I will let you know when I am able to submit jobs again.
]]></description>
        <pubDate>Thu, 09 Dec 2021 15:15:02 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Power Outage]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5715</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5715</guid>
        <description><![CDATA[
There is currently a power outage in the CERN computer centre. LHC@home services may be affected.
]]></description>
        <pubDate>Thu, 26 Aug 2021 15:11:40 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS App Downtime]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5714</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5714</guid>
        <description><![CDATA[
Due to an issue with the authentication service used by the CMS App, the job queue has been paused. New jobs will be sent again once the issue has been resolved.
]]></description>
        <pubDate>Thu, 26 Aug 2021 12:20:40 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS job queue to drain this weekend (21/08/2021)]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5709</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5709</guid>
        <description><![CDATA[
Oops, I posted this twice to the -dev board, not once there and again here -- sorry!<br />
<blockquote>CMS is about to release a new version of WMAgent based entirely on python 3. They have asked that they be able to update our agent by Monday evening (23/08), so I will not inject any new workflows before the upgrade. I expect the job queue to drain by late on Sunday.<br />
Please set your CMS application to no new tasks by then.</blockquote>
]]></description>
        <pubDate>Sat, 21 Aug 2021 00:10:34 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[2021 BOINC Workshop]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5675</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5675</guid>
        <description><![CDATA[
Videos of the talks from the <a href="https://www.boincworkshop.org/" rel="nofollow">2021 BOINC Workshop</a> are now available on <a href="https://www.youtube.com/c/BOINCWorkshop/playlists" rel="nofollow">YouTube</a>. Day 01 includes a talk giving an overview of LHC@home and Day 02 has another talk which provides more details on the specific technology we use. There are many other interesting talks from the other BOINC projects and from the BOINC developers.
]]></description>
        <pubDate>Thu, 27 May 2021 12:45:13 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Database issues]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5629</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5629</guid>
        <description><![CDATA[
Our database cluster is heavily loaded today, and LHC@home services time out from time to time. Our DBA is trying to fix this. Sorry for the trouble and happy crunching.
]]></description>
        <pubDate>Mon, 29 Mar 2021 16:43:57 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Downtime Wed 20/1]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5587</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5587</guid>
        <description><![CDATA[
LHC@home servers will be down for a database upgrade tomorrow Wednesday 20th of January early afternoon GMT.<br />
<br />
Sorry for the inconvenience, and happy crunching!
]]></description>
        <pubDate>Tue, 19 Jan 2021 13:54:45 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Best wishes for 2021]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5579</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5579</guid>
        <description><![CDATA[
We would like to warmly thank all our volunteers for your contributions during 2020! <br />
<br />
The LHC@home team also wishes you a Happy and hopefully healthy 2021!<br />
<br />
For those interested, please find <a href="https://home.cern/news/news/knowledge-sharing/relive-2020-cern" rel="nofollow">some highlights of CERN activities</a> during 2020 and <a href="https://home.cern/news/news/physics/cms-sets-new-bounds-mass-leptoquarks" rel="nofollow">recent findings from the CMS experiment</a> on our web pages.
]]></description>
        <pubDate>Tue, 05 Jan 2021 12:19:30 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Possible upload delays Wednesday 14/10]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5530</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5530</guid>
        <description><![CDATA[
Due to an upgrade of our Ceph storage on Wednesday 14th of October, there might be delays to file uploads and data assimilation. Should be back to normal by Wednesday evening.
]]></description>
        <pubDate>Tue, 13 Oct 2020 13:58:48 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[BOINC database downtime]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5509</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5509</guid>
        <description><![CDATA[
The LHC@home database will be down for a while this morning, due to a network interruption in the CERN data centre. Hence scheduler requests and uploads will fail for a while.  <br />
<br />
Sorry for the trouble and happy crunching.
]]></description>
        <pubDate>Mon, 07 Sep 2020 06:22:24 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Interruption to CMS@Home, Wednesday 15th July]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5478</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5478</guid>
        <description><![CDATA[
We need to interrupt the CMS project tomorrow to deploy a new Workflow Management Agent.  This means that jobs will not be available from sometime tonight.  We recommend that you set your CMS machines to No New Tasks as soon as possible, to avoid tasks terminating with an error if a job can't be fetched.<br />
We anticipate jobs will be available again late Wednesday (European time).  I'll update this thread when it is OK to proceed.
]]></description>
        <pubDate>Tue, 14 Jul 2020 13:19:28 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS job rundown]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5469</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5469</guid>
        <description><![CDATA[
We need to do some tests of a patch to fix a bug that's been plaguing us for some time.  To this end, I am letting the job queues drain, so there will be an absence of CMS jobs -- perhaps as soon as tomorrow morning, depending on how we continue to recover from today's Oracle quota problem.<br />
So, be prepared to set No New Tasks as soon as you see any sign of lack of jobs -- or sooner if you prefer.<br />
I don't know how long the testing will take, there are many factors at work (if the BOINC server sees that there are no jobs available, it will stop sending tasks; that will mean it takes longer for each test batch to be recognised and start serving jobs).
]]></description>
        <pubDate>Wed, 01 Jul 2020 17:51:10 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Downtime Saturday]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5462</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5462</guid>
        <description><![CDATA[
The database underlying LHC@home will be down on Saturday 27th of June due to an upgrade of a DB storage rack.<br />
<br />
Hence LHC@home BOINC services will be unavailable for a good part of the day. (Est 5:30-12:30 UTC)<br />
<br />
So your BOINC client connections to our servers are likely to fail on Saturday.<br />
<br />
Thanks and happy crunching!
]]></description>
        <pubDate>Thu, 25 Jun 2020 09:11:51 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server outage Wednesday]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5438</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5438</guid>
        <description><![CDATA[
Due to a failure of a database storage system, a number of database services at CERN failed on Wednesday afternoon. The LHC@home servers were affected as well, as the BOINC database was unavailable and requests timed out. <br />
<br />
Sorry for the trouble and happy crunching!
]]></description>
        <pubDate>Wed, 27 May 2020 17:59:14 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CERN and COVID-19]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5407</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5407</guid>
        <description><![CDATA[
Like many organisations, CERN is also affected by the COVID-19 pandemic. Researchers in the CERN community are trying to help out in different ways, as explained on <a href="https://againstcovid19.cern" rel="nofollow">this web page.</a><br />
<br />
As part of this effort to fight COVID-19, we also contribute computing power to Folding@home and Rosetta@home from temporarily available servers that were about to be decommissioned.<br />
<br />
During periods like this with little work from LHC@home, we also encourage you to participate in other BOINC projects such as Rosetta@home and contribute to the global fight of the pandemic.<br />
<br />
Many thanks for your contributions to LHC@home and continued happy crunching!<br />
<br />
With the best wishes of good health for you and your families from the LHC@home team.
]]></description>
        <pubDate>Fri, 24 Apr 2020 06:09:07 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server update]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5394</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5394</guid>
        <description><![CDATA[
The BOINC daemons will be down for a minor server update this afternoon. This is to bring our environment to the latest minor server release.
]]></description>
        <pubDate>Tue, 14 Apr 2020 09:35:58 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Theory application reaches 5 TRILLION events !!]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5387</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5387</guid>
        <description><![CDATA[
<b>LHC@home's Theory application</b> will tomorrow pass the milestone of <b>5 TRILLION simulated events</b>. This project, under its earlier name "Test4Theory", began production in 2011 and was the first BOINC project anywhere to use Virtual Machine technology (based on CERN's CernVM system).<br />
<br />
Over the coming weeks we plan to publish some more details about all this on the LHC@home and CERN websites. Our timetables have of course been affected by the Coronavirus disruptions, but we absolutely could not miss announcing and celebrating such a milestone as this.<br />
<br />
<b>The whole LHC@home team sends our sincerest thanks to all our volunteers for enabling this achievement !!</b>
]]></description>
        <pubDate>Thu, 09 Apr 2020 07:02:52 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Downtime Wednesday morning]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5344</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5344</guid>
        <description><![CDATA[
The LHC@home servers will be down tomorrow Wednesday 11th of March from 6AM to 8AM UTC due to a database intervention.<br />
<br />
Hence your BOINC clients may defer uploads or downloads.   Thanks for your patience and happy crunching!
]]></description>
        <pubDate>Tue, 10 Mar 2020 06:54:23 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[LHC@home web site upgrade]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5336</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5336</guid>
        <description><![CDATA[
The LHC@home information site on Drupal that includes the FAQ and other information about the applications running on LHC@home is being upgraded now.<br />
<br />
Hence links to the FAQ and other information pages about LHC@home will be unavailable for a while today.<br />
<br />
The new site will be ready later this afternoon.<br />
<br />
<i>-- The team</i>
]]></description>
        <pubDate>Fri, 06 Mar 2020 10:43:15 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS@Home -- ongoing problems]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5309</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5309</guid>
        <description><![CDATA[
Sorry that the CMS@Home HTCondor server is still playing up.  Again over the weekend it refused to serve jobs even though plenty were available.  Together with Federica we've decided not to inject another workflow this week, to let it "fail hard" again so that she can investigate which ClassAd preferences are not being met.<br />
So, you will probably see the number of running jobs falling, and the number of errors increasing, in the next few days.  Please feel free to set No New Tasks in that case.  I won't, so that there is still some pressure for jobs on the server.  I've also asked Laurence if I can run the CMS@Home VM outside of BOINC, to get around the quota back-off problem.
]]></description>
        <pubDate>Wed, 19 Feb 2020 14:07:10 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS@Home up again]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5303</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5303</guid>
        <description><![CDATA[
OK, jobs are available again.  Sorry for the long delay.  Remember, I'm only the front-man for a larger crew, so any downstream delays percolate up to my response.  Hopefully this will remain good for some time, but I still don't understand why the condor server occasionally refuses to send out jobs in a timely manner.
]]></description>
        <pubDate>Wed, 12 Feb 2020 11:14:38 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS@Home accidentally shut down -- Please set No New Tasks]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5298</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5298</guid>
        <description><![CDATA[
We need to upgrade the CMS@Home WMAgent before Thursday, so I tried to set the workflows to drain down.  Unfortunately, I misunderstood the batch states and killed off most of them instead. <b>:-(</b>.  There's one still left with about 200 jobs, so that won't last long.<br />
Please set your CMS projects to No New Tasks to avoid getting lots of computation errors.  I'll let you know when the upgrade is done and jobs are flowing again.
]]></description>
        <pubDate>Mon, 10 Feb 2020 15:33:39 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server outage - uploads failing]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5280</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5280</guid>
        <description><![CDATA[
Due to a network problem in the CERN computer centre early Thursday morning, our BOINC servers have lost access to a storage cluster. Hence uploads are failing, as is access to web pages.  Hopefully this should be fixed soon.
]]></description>
        <pubDate>Thu, 23 Jan 2020 07:31:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS@Home disruption this week]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5220</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5220</guid>
        <description><![CDATA[
It appears that a database intervention at CERN went badly, leaving our data tables empty and us not being able to submit new CMS@Home jobs.  Advice is that it will take several days to recover -- and as well as that some of the major players are in the USA, which has holidays for the rest of this week.  I'll keep an eye on it, but I'm doubtful we'll be running again this week.  Sorry 'bout that!<br />
Happy Thanksgiving...
]]></description>
        <pubDate>Wed, 27 Nov 2019 08:21:06 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Database intervention Monday morning]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5211</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5211</guid>
        <description><![CDATA[
LHC@home and associated BOINC services will be unavailable for about 1 hour on Monday 25th of November due to a database storage intervention.<br />
<br />
Thanks for your understanding and happy crunching.
]]></description>
        <pubDate>Fri, 22 Nov 2019 13:38:09 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS job shortage Wednesday 13th November]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5199</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5199</guid>
        <description><![CDATA[
CMS IT will be installing a new version of WMAgent on Wednesday.  This will impact job availability for the duration of the intervention.  We might be able to eliminate the little gremlin that's been plaguing us for the last few weeks, too.<br />
So, please set your CMS processors to <font color="red">No New Tasks</font> sometime tomorrow, Tuesday 12th, so that current tasks will stop requesting new jobs before the queues get cut.  I'll let you know when jobs are available again.<br />
Thanks.
]]></description>
        <pubDate>Mon, 11 Nov 2019 15:49:27 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server upgrade]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5153</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5153</guid>
        <description><![CDATA[
Following a couple of weeks of tests in the LHC@home development project, we are upgrading our production server cluster to BOINC server release 1.2 this afternoon.  During the update we will be running with slightly lower server capacity than usual.
]]></description>
        <pubDate>Mon, 30 Sep 2019 11:56:49 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[The SixTrack team at the LHC@Home desk for the CERN open days]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5138</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5138</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
thanks to those who have filled in the doodle we circulated last week:<br />
<a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5130&postid=39794#39794" rel="nofollow">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5130&postid=39794#39794</a><br />
<br />
We decided to deliver a presentation every day in the most populated time slots out of the doodle poll, i.e. on Sat. 14th Sep, between 03:00 and 04:00 PM, and on Sun 15th Sep, between 02:00 and 03:00 PM.<br />
The meeting point will be the LHC@Home desk in R2 (building 504), at the beginning of the time slot. We will have to walk a few minutes to a meeting room where the presentations will take place. We will be back at the meeting point by the end of the time slot at the latest.<br />
<br />
Looking forward to shaking hands and meeting you,<br />
Alessio and Massimo, for the SixTrack team
]]></description>
        <pubDate>Thu, 12 Sep 2019 11:53:15 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[The SixTrack team welcomes the LHC@Home volunteers at the CERN open days]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5130</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5130</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
following Nils's post on the MBs:<br />
<a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5129&postid=39763#39763" rel="nofollow">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5129&postid=39763#39763</a><br />
the SixTrack team is looking into welcoming you at CERN and thanking you in person for the CPU time you make available to us. To do this in the best way, we would like to know when you will most likely be passing by the IT stand, so that we can concentrate our efforts on the times when most of you can be there. Hence, please find below a doodle poll that we will use to identify the optimal time window:<br />
<a href="https://doodle.com/poll/qpw36awgspufawi7" rel="nofollow">https://doodle.com/poll/qpw36awgspufawi7</a><br />
<br />
Thanks a lot in advance, and happy crunching!<br />
Alessio and Massimo, for the SixTrack team
]]></description>
        <pubDate>Mon, 02 Sep 2019 08:33:56 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CERN Open Days in 2 weeks!]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5129</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5129</guid>
        <description><![CDATA[
During the <a href="https://opendays.cern/" rel="nofollow">CERN Open Days 2019</a>, we will have a small <a href="https://it-opendays.web.cern.ch/volunteer-computing" rel="nofollow">LHC@home stand</a> as part of the <a href="https://it-opendays.web.cern.ch/activities" rel="nofollow">IT activities</a> in building 504 near the Data Centre. <br />
<br />
LHC@home will also be present at the <a href="https://opendays.cern/site/atlas-point-1" rel="nofollow">ATLAS experiment site</a>, in the ATLAS Computing Corner.<br />
<br />
We hope that many of you will be able to visit CERN during the Open Days and would be happy to see you here!<br />
<br />
Please refer to: <a href="https://opendays.cern/plan-your-visit" rel="nofollow">Plan your visit</a> and the <a href="https://opendays.cern/activities" rel="nofollow">list of activities during the Open days</a> for more information about all the visit points on the CERN sites.
]]></description>
        <pubDate>Fri, 30 Aug 2019 09:07:17 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Many queued tasks - server status page erratic]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5119</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5119</guid>
        <description><![CDATA[
Due to the very high number of queued Sixtrack tasks, we have enabled 4 load-balanced scheduler/feeder servers to handle the demand. (Our bottleneck is the database, but several schedulers can cache more tasks to be dispatched.)  <br />
<br />
Our server status page does not currently show the daemon status on remote servers in real time. Hence the server status page may indicate a varying number of processes, depending on which web server is active.<br />
<br />
Please also be patient if you are not getting tasks for your preferred application quickly enough. After a few retries, there will be some tasks.  Thanks for your understanding and happy crunching!<br />
<br />
<i>---the team</i>
]]></description>
        <pubDate>Wed, 21 Aug 2019 11:41:17 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS@Home disruption, Monday 22nd July]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5087</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5087</guid>
        <description><![CDATA[
I've had the following notice from CERN/CMS IT:<br />
<br />
<font color="green">>> following the hypervisor reboot campaign, as announced by CERN IT here:  https://cern.service-now.com/service-portal/view-outage.do?n=OTG0051185<br />
>> the following VMs - under the CMS Production openstack project - will be rebooted on Monday July 22 (starting at 8:30am CERN time):<br />
...<br />
>> | vocms0267 | cern-geneva-b     | cms-home<br />
</font><br />
to which I replied:<br />
<font color="blue">>        Thanks, Alan. vocms0267 runs the CMS@Home campaign.  Should I warn the volunteers of the disruption, or will it be mainly transparent?<br />
</font><br />
and received this reply:<br />
<font color="red">Running jobs will fail because they won't be able to connect to the schedd condor_shadow process. So this will be the visible impact on the users. There will be also a short time window (until I get the agent restarted) where there will be no jobs pending in the condor pool.<br />
So it might be worth it giving the users a heads up.<br />
</font><br />
So, my recommendation is that you set "No New Tasks" for CMS@Home sometime Sunday afternoon, to let tasks complete before the 08:30 (CERN time) restart.  I'll let you know as soon as Alan informs me that vocms0267 is up and running again.
]]></description>
        <pubDate>Wed, 17 Jul 2019 13:14:12 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Native ATLAS and Theory applications require a CVMFS configuration update]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5077</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5077</guid>
        <description><![CDATA[
Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration. Please see the following <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4758&postid=39281" rel="nofollow">post</a> for the details.
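For orientation, a local CVMFS client configuration on a native Linux host typically lives in /etc/cvmfs/default.local and looks roughly like the sketch below. The repository list and values shown are illustrative placeholders - the linked post has the authoritative settings for this update:<br />
<pre style="white-space:pre-wrap;">
# /etc/cvmfs/default.local -- illustrative values only
CVMFS_REPOSITORIES=atlas.cern.ch,grid.cern.ch,sft.cern.ch
CVMFS_QUOTA_LIMIT=4096      # local cache size in MB
CVMFS_HTTP_PROXY=DIRECT     # or your local proxy, e.g. http://myproxy:3128
</pre>
After editing, running "cvmfs_config reload" applies the new settings and "cvmfs_config probe" checks that the repositories mount correctly.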
]]></description>
        <pubDate>Fri, 05 Jul 2019 08:06:20 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server downtime]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5070</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5070</guid>
        <description><![CDATA[
Our BOINC servers were unavailable from 13:45 to 15:30 CET this afternoon due to a problem with a shared storage cluster. This explains possible download/upload errors from your clients.<br />
<br />
Sorry for the trouble and happy crunching.
]]></description>
        <pubDate>Wed, 26 Jun 2019 13:53:13 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[killing extremely long SixTrack tasks]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5064</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5064</guid>
        <description><![CDATA[
Dear all,<br />
<br />
we had to kill ~10k WUs named:<br />
<pre style="white-space:pre-wrap;">w-c*_job*__s__62.31_60.32__*__7__*_sixvf_boinc*</pre><br />
due to a mismatch between the requested disk space and that actually necessary to the job.<br />
These tasks would eventually have been killed by the BOINC client anyway, with an EXIT_DISK_LIMIT_EXCEEDED message - please see:<br />
<a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5062" rel="nofollow">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5062</a><br />
for further info.<br />
<br />
These tasks cover 10^7 LHC turns, a factor of 10 more than usual, with output files growing in size until the limit is hit.<br />
<br />
Not all tasks with such names were killed - I killed only those that should cover the stable part of the beam; these tasks are expected to run long and hence reach the disk-usage limit. The other WUs should see enough beam losses that the limit is not reached - please post in this thread if this is not the case. This cherry-picking was done in an effort to preserve, as far as possible, tasks already being crunched or pending validation.<br />
<br />
As soon as you update the LHC@home project in your BOINC manager, you should see the affected tasks being killed.<br />
<br />
We will soon resubmit the same tasks with appropriate disk requirements.<br />
Apologies for the disturbance, and thanks for your understanding.<br />
A.
]]></description>
        <pubDate>Tue, 18 Jun 2019 16:49:08 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Using a local proxy to reduce network traffic for CMS]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5053</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5053</guid>
        <description><![CDATA[
Thanks to computezrmle, with additional work from Laurence and a couple of CMS experts (and my adding one line to the site-local-config file), there is now a way to set up a local caching proxy to greatly reduce your network traffic.  Each job instance that runs within a CMS BOINC task must retrieve a lot of set-up data from our database.  This data doesn't change very often, so if you keep a local copy the job can access that rather than going over the network every time.<br />
Instructions on how to do this are available at <a href="https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=475&postid=6396" rel="nofollow">https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=475&postid=6396</a> or <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5052&postid=39072" rel="nofollow">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5052&postid=39072</a>
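By way of illustration only - the linked posts contain the authoritative instructions - such a local caching proxy is typically a small Squid configuration along these lines; the host addresses and cache sizes below are made-up placeholders:<br />
<pre style="white-space:pre-wrap;">
# squid.conf fragment -- illustrative placeholders, see the linked posts
http_port 3128
cache_dir ufs /var/spool/squid 2048 16 256   # 2 GB on-disk cache
maximum_object_size 1024 MB                  # allow large cached objects
acl localnet src 192.168.0.0/16              # your LAN
http_access allow localnet
http_access deny all
</pre>
You would then point the BOINC client at the proxy via its HTTP proxy settings so that task downloads pass through the local cache.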
]]></description>
        <pubDate>Fri, 07 Jun 2019 14:24:45 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[new exes for SixTrack 5.02.05]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5051</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5051</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
we are pleased to announce the release to production (SixTrack app) of new exes for the current pro version (v5.02.05). We have new exes for FreeBSD (avx/sse2), an exe for XP hosts (32-bit), an aarch64 exe for Linux, and one for Android. Many thanks to James, Kyrre and Veronica for finding the time to produce them.<br />
<br />
Distributing an exe compatible with XP hosts is not meant to encourage people to stay on unsupported OSs, but rather to allow a smooth transition to more recent ones. In this way, people with XP hosts do not miss the chance to contribute to the present wave of SixTrack tasks (expected to be quite long) while considering options for upgrading their hosts. At the same time, we are looking into preparing 32-bit Linux exes. It should be noted that all Windows exes are distributed without targeting specific kernel versions - hence, XP hosts may receive tasks with regular Windows exes that fail immediately, but the BOINC server should quickly learn that the XP-compatible exe is the appropriate one.<br />
<br />
We are also very happy to start involving FreeBSD and Android users in our production chain. For the latter platform, the present exe won't run on Android versions >= 8 - James is still looking into this. Since the Android version filtering needs a fix on the scheduler side:<br />
<a href="https://github.com/BOINC/boinc/issues/3172" rel="nofollow">https://github.com/BOINC/boinc/issues/3172</a><br />
we labelled the Android exe as beta. Hence, SixTrack beta users with Android 8 or later should either not request tasks for those hosts or untick the test applications flag in their LHC@home project preferences.<br />
<br />
We are also pursuing the generation of macOS exes, and we should test them soon on sixtracktest.<br />
<br />
Thanks for your continuous support and help,<br />
Alessio, for the SixTrack team
]]></description>
        <pubDate>Tue, 04 Jun 2019 10:17:02 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[2019 BOINC Pentathlon is over - a big thank you from the SixTrack team!]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5036</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5036</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
the 2019 Pentathlon is over, and we would like to thank all the participants for crunching our tasks! We saw our BOINC CPU capacity almost double, boosting our calculations, even though it was only for a few days. We are very grateful for that!<br />
<br />
The SixTrack team would also like to thank all of you volunteers who regularly support us with your CPUs. You give us the possibility to deepen our understanding of the dynamic aperture, a quantity of paramount importance for the stability of particle beams in large research accelerators like superconducting colliders. Last but not least, see a very recent paper in the leading journal in the field of accelerator physics, comparing simulations and measurements:<br />
<a href="https://journals.aps.org/prab/pdf/10.1103/PhysRevAccelBeams.22.034002" rel="nofollow">https://journals.aps.org/prab/pdf/10.1103/PhysRevAccelBeams.22.034002</a><br />
where simulation results have been obtained thanks to you and BOINC!<br />
<br />
A lot has already been done with your help, but a lot more is still to come in the near future. We count on your support!<br />
Keep up the good work,<br />
Alessio and Massimo, for the SixTrack team
]]></description>
        <pubDate>Mon, 20 May 2019 08:00:16 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[BOINC Pentathlon  - Sixtrack sprint]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5029</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5029</guid>
        <description><![CDATA[
We are very grateful to have been chosen for the <a href="http://www.setigermany.de/boinc_pentathlon/" rel="nofollow">BOINC Pentathlon of SETI Germany</a> over the coming days.  For this, the SixTrack team has submitted a huge backlog of jobs, and our servers will primarily distribute SixTrack tasks for the next few days.   There will only be a drip-feed of other applications until our backlog is reduced. For fans of other applications: stay tuned, or run SixTrack for a few days.
]]></description>
        <pubDate>Wed, 15 May 2019 14:30:26 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS -- Please set "no new tasks"]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5028</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5028</guid>
        <description><![CDATA[
Hi to all CMS-ers.  We need to drain the job queue so that a new version of the WMAgent can be installed.<br />
Can you please set No New Tasks so that your current tasks can run out and no new jobs start?  If you have any tasks waiting to run, please suspend or abort them.<br />
Thanks, I'll let you know as soon as the change is done.
]]></description>
        <pubDate>Tue, 14 May 2019 14:40:22 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Database problems]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5026</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5026</guid>
        <description><![CDATA[
We are having database problems and have to schedule an intervention at 3:30pm UTC.  The LHC@home servers are back again. We may have some irregular dispatching of some applications over the next hours.
]]></description>
        <pubDate>Mon, 13 May 2019 13:01:57 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Native Theory Application (TheoryN) Released]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5023</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5023</guid>
        <description><![CDATA[
The Native Theory application for Linux has moved out of beta status and is now generally available. It is similar to the native ATLAS application in that it requires CVMFS to be installed locally, but it does not require Singularity as it uses Linux containers (runc). To set up your machine for this application, please follow the <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4971&postid=38259#38259" rel="nofollow">instructions</a>. Even if native ATLAS tasks are running successfully, follow the instructions to ensure that CVMFS is configured correctly for both and that Linux containers are enabled. This is a new application (TheoryN) rather than an alternative version of the Theory application, as they have different resource requirements. If there are any issues, please post them to the Theory message board.
]]></description>
        <pubDate>Mon, 13 May 2019 08:07:13 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[new SixTrack version 5.02.05 released on BOINC for production]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5011</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5011</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
after a long period of development and testing, we are pleased to announce a new major release of SixTrack on BOINC. The development team did an impressive job refactoring the code: porting arrays to dynamic memory allocation, splitting the source code (previously gathered in a few huge files) into Fortran90 modules to make maintenance easier, and deleting a lot of duplicated code and massive arrays - not to mention countless bug fixes, documentation updates, re-written input parsing, an improved build system, and an extended test suite.<br />
<br />
We have also implemented plenty of new features. Most of them are still available only on the batch system at CERN (e.g. linking to Geant4 or Pythia, running coupled to FLUKA or other external codes, support for ROOT and HDF5), but many can already be used by BOINC jobs, like on-line aperture checking, electron lenses, generalised RF-multipoles, quadrupole fringe fields, and hashing of files for checks. All these new features will allow us to study new machine configurations and refine results, and we count on your help!<br />
<br />
Thanks again for your support, and keep up the good work!<br />
<br />
Alessio, for the SixTrack Team
]]></description>
        <pubDate>Fri, 03 May 2019 16:01:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Problem writing CMS job results; please avoid CMS tasks until we find the reason]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4998</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4998</guid>
        <description><![CDATA[
Since some time last night CMS jobs appear to have problems writing results to CERN storage (DataBridge).  It's not affecting BOINC tasks as far as I can see, they keep running and credit is given.  However, Dashboard does see the jobs as failing, hence the large red areas on the job plots.<br />
Until we find out where the problem lies, it's best to set No New Tasks or otherwise avoid CMS jobs.  I'll let you know when things are back to normal again.
]]></description>
        <pubDate>Thu, 18 Apr 2019 15:44:45 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS jobs]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4978</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4978</guid>
        <description><![CDATA[
The batch I submitted last night is now showing on the monitor, so you can resume tasks at will.
]]></description>
        <pubDate>Sat, 23 Mar 2019 17:56:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Warning: possible shortage of CMS jobs - set No New Tasks as a precaution]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4977</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4977</guid>
        <description><![CDATA[
There was an intervention (i.e. an upgrade) yesterday afternoon[1] on the cmsweb-testbed system we use to submit CMS workflows, which left things a bit confused.  One problem was fixed, and the monitor shows all good.  However, we are running out of CMS jobs -- maybe 10 hours' worth left -- and the new batch I submitted yesterday isn't showing up on the testbed monitor.  I submitted another last night, but still neither batch was showing this morning, so I submitted yet another.<br />
At the moment I don't know whether the submission has failed or whether the monitor hasn't picked up the new batches.  As a precaution, set No New Tasks on your CMS project(s) to avoid tasks crashing due to lack of jobs.  I'll let you know as soon as I'm sure jobs are available again.<br />
<br />
[1] How many times do I have to tell people not to touch critical systems on a Friday -- especially Friday afternoon!?
]]></description>
        <pubDate>Sat, 23 Mar 2019 11:31:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CERN Open Days 2019]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4957</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4957</guid>
        <description><![CDATA[
<a href="https://home.cern/news/news/cern/cern-open-days-explore-future-us" rel="nofollow">CERN Open Days 2019</a>
]]></description>
        <pubDate>Wed, 20 Feb 2019 13:48:39 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[BOINC Open Source Project Looking for Experienced Macintosh Developers]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4951</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4951</guid>
        <description><![CDATA[
The Berkeley Open Infrastructure for Network Computing (BOINC) system is the software infrastructure used by LHC@home and many other volunteer distributed computing projects. The BOINC Open Source Project is looking for volunteers to develop and maintain the BOINC client on Macintosh. The BOINC Client and Manager are C++ cross-platform code supporting MS Windows, Mac, Linux, and several other operating systems. We currently have a number of volunteer developers supporting Windows and Linux, but our main Mac developer is winding down his involvement after many years. He is prepared to help a few new Mac developers get up to speed.<br />
<br />
If you have Mac development experience and are interested in volunteering time to help support and maintain the BOINC Mac client, please have a look at the more detailed description here: <a href="https://boinc.berkeley.edu/trac/wiki/MacDeveloper" rel="nofollow">https://boinc.berkeley.edu/trac/wiki/MacDeveloper</a><br />
<br />
If you are not a Mac developer, but have other skills and are interested in contributing to BOINC, the link above also has more general information.
]]></description>
        <pubDate>Thu, 14 Feb 2019 07:54:10 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Consent required to export statistics]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4918</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4918</guid>
        <description><![CDATA[
Following the implementation of GDPR compliance with BOINC, user consent is now required to export BOINC statistics from LHC@home to BOINC statistics sites, such as <a href="https://boincstats.com/" rel="nofollow">BOINC stats</a>.<br />
<br />
To grant your consent, please login to the LHC@home site and update your project preferences.  Once logged on to the LHC@home site, please navigate to the <a href="https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project" rel="nofollow">Project Preferences page.</a><br />
<br />
Click on "Edit preferences" and then tick the box on the line:<br />
"Do you consent to exporting your data to BOINC statistics aggregation Web sites?"   <br />
<br />
This will enable continued export of statistics from LHC@home for your BOINC user account.  If you leave the box unchecked, statistics will no longer be exported.<br />
<br />
Thanks for your contributions to LHC@home!
]]></description>
        <pubDate>Wed, 09 Jan 2019 16:10:34 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Seasons greetings from LHC@home]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4909</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4909</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
We in the  <a href="http://greetings.web.cern.ch/e-card/5b1c8084ab2cb5256cb01d96dc37d9cd" rel="nofollow">LHC@home team wish you all a Merry Christmas and Happy New Year!</a> <br />
<br />
Our warm thanks to all of you for your contributions to LHC@home!
]]></description>
        <pubDate>Fri, 21 Dec 2018 14:21:41 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server upgrade]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4901</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4901</guid>
        <description><![CDATA[
The LHC@home BOINC servers will be upgraded to the <a href="https://github.com/BOINC/boinc/releases" rel="nofollow">latest BOINC server release</a> Tuesday morning at 8AM GMT. BOINC services like upload/download and task validation and assimilation will be paused for about 1 hour during the intervention to update our servers.
]]></description>
        <pubDate>Mon, 10 Dec 2018 15:43:39 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Pausing submission of LHCb Applications]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4884</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4884</guid>
        <description><![CDATA[
Dear BOINC Volunteers, <br />
<br />
LHCb has been very grateful to the BOINC community in the past years for their support and provisioning of computing resources to run LHCb simulation jobs. Since the start of the service for LHCb, you have provided computing resources that allowed us to execute a fantastic total of 3.1 million successful jobs, which simulated 142'740'087 events. This work contributed considerably to the work of the experiment. Many thanks to you all!<br />
<br />
Despite this success, we have also observed that the workload connected to BOINC operations within the LHCb computing project has grown. After internal discussions, we have decided to pause the service and not run LHCb applications via BOINC for the time being, with the possibility of re-opening the service in the future. <br />
<br />
Please note that the possibility to contribute computing resources to other BOINC projects is untouched by this decision, and we would like to encourage you to continue supporting the other projects represented via the LHC@home BOINC service. <br />
<br />
For now I would like to re-state my thanks to you, the BOINC community, for your support. <br />
<br />
Best regards<br />
<br />
Dr. Stefan Roiser<br />
LHCb Computing Project Leader<br />
stefan.roiser@cern.ch
]]></description>
        <pubDate>Mon, 19 Nov 2018 15:26:59 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[SixTrack news]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4854</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4854</guid>
        <description><![CDATA[
Dear Volunteers,<br />
<br />
In spite of the break and the lack of simulation work, things are moving behind the scenes! Most of the trackers have been busy with the preparation of, and attendance at, the HiLumi annual collaboration meeting. For instance:<br />
* new scanning parameters for DA studies, to shed some light on open points concerning the different behavior of the two beams in the LHC:<br />
<a href="https://indico.cern.ch/event/743633/contributions/3071974/attachments/1695257/2728719/VanderVeken_phase_dp.pdf" rel="nofollow">https://indico.cern.ch/event/743633/contributions/3071974/attachments/1695257/2728719/VanderVeken_phase_dp.pdf</a><br />
* an update of DA results to the latest developments on HL-LHC optics:<br />
<a href="https://indico.cern.ch/event/742082/contributions/3085158/attachments/1736226/2808309/nkarast_HLCollab_18102018.pdf" rel="nofollow">https://indico.cern.ch/event/742082/contributions/3085158/attachments/1736226/2808309/nkarast_HLCollab_18102018.pdf</a><br />
 <br />
The collaboration meeting is the most important event of the large collaboration, led by CERN, that is designing and building the High-Luminosity upgrade of the LHC. It is not only a forum to present and discuss recent results, but also an event that inspires new ideas and studies. Therefore, we would like to announce that in a few weeks we will be back, counting on your usual fantastic and essential support, to launch new simulation campaigns!<br />
 <br />
                                                Stay tuned!<br />
<br />
Alessio and Massimo, for the SixTrack team
]]></description>
        <pubDate>Fri, 19 Oct 2018 08:17:24 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Unexpected server downtime]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4845</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4845</guid>
        <description><![CDATA[
Due to a failure in part of our computing infrastructure that also prevented our fail-over mechanism from working, the LHC@home web server was unavailable until this morning. Sorry for this, and thanks for your contributions to our project.
]]></description>
        <pubDate>Thu, 04 Oct 2018 08:09:13 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[test of SixTrack 5.00.00]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4744</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4744</guid>
        <description><![CDATA[
Dear all,<br />
we are in the process of testing a new SixTrack version, 5.00.00. This is a major upgrade of the code, which has been deeply refactored - including dynamic memory allocation. Moreover, it provides fixes to the physics already implemented, e.g. solenoidal fields and online aperture checking, and brand-new features, e.g. electron lenses and ion tracking. We are finalising the implementations, hence the version running as sixtracktest is a quick test of the main functionalities and the code refactoring.<br />
More to come in the next days / weeks.<br />
Thanks a lot for your precious help!<br />
Keep up the good work, and happy crunching!<br />
Alessio, for the SixTrack team
]]></description>
        <pubDate>Wed, 27 Jun 2018 07:13:28 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS production pause]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4688</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4688</guid>
        <description><![CDATA[
We have run into a problem with the CMS project -- the merged result files processed at CERN are failing to be written to central storage.  Consequently I have decided not to submit any more jobs until the experts have clarified what the problem is.  The CMS jobs queue is about to start draining and I expect it to be empty of volunteer jobs within a few hours (there may still be post-production jobs, but these run at CERN, not on your machines).  I suggest you set No New Tasks or transfer to another project until the situation is resolved.
]]></description>
        <pubDate>Mon, 23 Apr 2018 15:50:52 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CERN network problem]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4665</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4665</guid>
        <description><![CDATA[
There was a <a href="https://cern.service-now.com/service-portal/view-outage.do?n=OTG0043238" rel="nofollow">major network problem</a> at CERN this morning.  It has apparently been resolved but not yet understood, according to the above link.
]]></description>
        <pubDate>Thu, 05 Apr 2018 09:44:31 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server upgrade - file uploads paused]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4651</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4651</guid>
        <description><![CDATA[
We will change the storage back-end on our BOINC servers today, and the file servers will be disabled during the operation.   <br />
<br />
Hence your BOINC clients will not be able to upload or download files from LHC@home for a few hours today. Once our maintenance operation is finished, BOINC clients will be able to upload again.<br />
<br />
Thanks for your understanding and happy crunching!
]]></description>
        <pubDate>Mon, 26 Mar 2018 06:23:52 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Theory application reaches 4 TRILLION events today !!]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4638</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4638</guid>
        <description><![CDATA[
LHC@home's <b>Theory application</b> today passed the milestone of <b>4 TRILLION simulated events</b>. This project, under its earlier name "Test4Theory", began production in 2011 and was the first BOINC project to use Virtual Machine technology (based on CERN's CernVM system).<br />
<br />
We will be publishing some more details for you on the LHC@home and CERN websites over the coming days. Here is a first release:<br />
<br />
<a href="http://lhcathome.web.cern.ch/articles/test4theory/test4theory-tops-4-trillion-events" rel="nofollow">http://lhcathome.web.cern.ch/articles/test4theory/test4theory-tops-4-trillion-events</a><br />
<br />
Many thanks to all our volunteers for enabling this achievement !
]]></description>
        <pubDate>Wed, 14 Mar 2018 05:37:23 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS Job queue draining]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4624</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4624</guid>
        <description><![CDATA[
Due to a problem with the WMAgent submission task, a new batch of CMS jobs is not being put in the Condor queue.  So, the queue is now draining and there will be no more jobs available in a couple of hours.  Best to set your BOINC instance to No New Tasks if you can, to avoid spurious compute error terminations.
]]></description>
        <pubDate>Thu, 22 Feb 2018 22:10:56 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Task creation delayed - database maintenance]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4606</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4606</guid>
        <description><![CDATA[
Due to a database issue last week, task generation is delayed and we need to clean up stuck workunits.  The project daemons will be on and off this morning while we try to debug a problem with the BOINC transitioner.
]]></description>
        <pubDate>Mon, 05 Feb 2018 08:16:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Thanks for supporting SixTrack at LHC@Home and updates]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4590</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4590</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
All members of the SixTrack team would like to thank each of you for supporting our project at LHC@Home. The last weeks saw a significant increase in work load, and your constant help did not pause even during the Christmas holidays, which is something that we really appreciate!<br />
<br />
As you know, we are interested in simulating the dynamics of the beam in ultra-relativistic storage rings, like the LHC. As in other fields of physics, the dynamics is complex, and it can be decomposed into a linear and a non-linear part. The former determines the expected performance of the machine, whereas the latter might dramatically affect the stability of the circulating beam. While the former can be analysed with the computing power of a laptop, the latter requires BOINC, and hence you! In fact, we perform very large scans of parameter spaces to see how non-linearities affect the motion of beam particles in different regions of the beam phase space and for different values of key machine parameters. Our main observable is the dynamic aperture (DA), i.e. the boundary between stable (bounded) and unstable (unbounded) motion of particles.<br />
<br />
The studies mainly target the LHC and its luminosity upgrade, the so-called HL-LHC. Thanks to this new accelerator, by ~2035 the LHC will be able to deliver to the experiments ten times more data than is foreseen in the first 10-15 years of LHC operation, in a comparable time. We are in full swing designing the upgraded machine, and the present operation of the LHC is a unique occasion to benchmark our models and simulation results. A deep knowledge of the DA of the LHC is essential to properly tune the working point of the HL-LHC.<br />
<br />
If you have crunched simulations named "workspace1_hl13_collision_scan_*" (Frederik), then you have helped us map the effects on dynamic aperture of the unavoidable magnetic errors expected from the new HL-LHC hardware, and identify the best working point of the machine and correction strategies. Tasks named like "w2_hllhc10_sqz700_Qinj_chr20_w2*" (Yuri) focus on the magnets responsible for squeezing the beams before colliding them; due to their prominent role, these magnets, very few in number, have such a big impact on the non-linear dynamics that the knobs controlling the linear part of the machine can offer relevant remedial strategies.<br />
<br />
Many recent tasks are aimed at relating the beam lifetime to the dynamic aperture. The beam lifetime is a measured quantity that tells us how long the beams are going to stay in the machine, based on the current rate of losses. A theoretical model relating beam lifetime and dynamic aperture was developed, and a large simulation campaign has started to benchmark the model against the many measurements taken with the LHC in the past three years. One set of studies, named "w16_ats2017_b2_qp_0_ats2017_b2_QP_0_IOCT_0" (Pascal), considers as its main source of non-linearities the unavoidable multipolar errors of the magnets, whereas tasks named "LHC_2015*" (Javier) take into account the parasitic encounters near the collision points, i.e. the so-called "long-range beam-beam effects".<br />
<br />
One of our users (Ewen) is carrying out two studies thanks to your help. In 2017, DA was directly measured for the first time in the LHC at top energy, and nonlinear magnets on either side of the ATLAS and CMS experiments were used to vary the DA. He wants to see how well the simulated DA compares to these measurements. The second study looks systematically at how the time dependence of DA in simulation depends on the strength of linear transverse coupling and the way it is generated in the machine. In fact, some previous simulations and measurements at injection energy have indicated that linear coupling between the horizontal and vertical planes can have a large impact on how the dynamic aperture evolves over time.<br />
<br />
In all this, your help is fundamental, since you let us carry out the simulations and studies we are interested in, running the tasks we submit to BOINC. Hence, the warmest "thank you" to you all!<br />
Happy crunching to everyone, and stay tuned!<br />
<br />
Alessio and Massimo, for the LHC SixTrack team.
]]></description>
        <pubDate>Tue, 23 Jan 2018 17:08:14 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[LHC@home down-time due to system updates]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4589</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4589</guid>
        <description><![CDATA[
Tomorrow Wednesday 24/1, the LHC@home servers will be unavailable for a short period while our storage backend is taken down for a system update.<br />
<br />
Today, Tuesday 23/1, some of the Condor servers that handle CMS, LHCb and Theory tasks will be down for a while.   Regarding the on-going issues with upload of files,  please refer to <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4567" rel="nofollow">this thread.</a><br />
<br />
Thanks for your understanding and happy crunching!
]]></description>
        <pubDate>Tue, 23 Jan 2018 09:19:32 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Short interruptions Tuesday]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4579</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4579</guid>
        <description><![CDATA[
There will be a couple of short server outages while our BOINC service passes over to fail-over nodes today, Tuesday 16th of January.   Similar interruptions will happen next week, as we carry out security updates on our computing infrastructure.
]]></description>
        <pubDate>Tue, 16 Jan 2018 07:05:48 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[File upload issues]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4567</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4567</guid>
        <description><![CDATA[
Our NFS storage backend got saturated and hence uploads are failing intermittently.<br />
<br />
The underlying cause is an issue with file deletion, which we are trying to resolve.<br />
<br />
Sorry for the trouble and thanks for your patience with transfers to LHC@home.
]]></description>
        <pubDate>Mon, 08 Jan 2018 09:25:05 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Increased file server capacity]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4539</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4539</guid>
        <description><![CDATA[
Since Tuesday evening, we have had intermittent upload failures due to a large number of new BOINC hosts that coincidentally joined at the same time as larger ATLAS tasks were introduced. Our file server capacity has been increased, and backlogged tasks waiting for upload should go through again soon. (Please refer to the <a href="https://lhcathome.cern.ch/lhcathome/forum_forum.php?id=93" rel="nofollow">ATLAS application</a> and <a href="https://lhcathome.cern.ch/lhcathome/forum_forum.php?id=3" rel="nofollow">Number crunching</a> forums for more details.)
]]></description>
        <pubDate>Thu, 14 Dec 2017 10:05:32 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Missing accounts]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4537</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4537</guid>
        <description><![CDATA[
Due to an overly aggressive spam-cleaning campaign (our fault), we accidentally deleted some valid accounts on Monday.  We have restored a backup copy of the BOINC database, and will recover the missing account data.<br />
<br />
If you get a message "missing account key" from your BOINC client, you may be affected. We expect that we can fix this later today, once we have verified the data sets. Hence there is no need to register again.<br />
<br />
My apologies for this mishap.
]]></description>
        <pubDate>Wed, 13 Dec 2017 08:46:45 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[BOINC server update Thursday]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4529</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4529</guid>
        <description><![CDATA[
We will upgrade the LHC@home web servers to the new BOINC server code with the "Bootstrap" theme on Thursday 7 December. The new style and layout can already be seen on the <a href="https://lhcathomedev.cern.ch/lhcathome-dev/" rel="nofollow">LHC@home development project</a>.<br />
<br />
During the intervention, from 09 UTC on Thursday, there may be intermittent availability of the LHC@home servers, so BOINC clients may back off and try to upload data later.
]]></description>
        <pubDate>Tue, 05 Dec 2017 09:07:45 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Phaseout of legacy site: lhcathomeclassic.cern.ch/sixtrack]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4455</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4455</guid>
        <description><![CDATA[
LHC@home has been consolidated and uses SSL for communication <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4002&postid=27816#27816" rel="nofollow">as mentioned in this thread</a> last year.<br />
<br />
Some BOINC clients are still connecting to the old <i>lhcathomeclassic.cern.ch/sixtrack</i> address, which will be phased out soon.<br />
<br />
If this is the case for you, please re-attach to the project at the current LHC@home URL shown in the BOINC manager. (http://lhcathome.cern.ch will redirect your BOINC client to <a href="https://lhcathome.cern.ch/lhcathome" rel="nofollow">https://lhcathome.cern.ch/lhcathome</a> )<br />
<br />
For those who are still running an old BOINC 6 client, please upgrade to BOINC 7.2 or later. (The current BOINC client releases are 7.6.33 or 7.8.)<br />
<br />
Many thanks for your contributions to LHC@home!
]]></description>
        <pubDate>Fri, 29 Sep 2017 13:50:32 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS jobs unavailable Weds 27th September]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4452</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4452</guid>
        <description><![CDATA[
An upgrade to the CMS@Home workflow management system (WMAgent) is planned for tomorrow (Wed Sep 27th).  This needs the current batch of jobs to be stopped so that the queue is empty.  I plan to do this about 0700-0800 UTC on Wednesday.<br />
To avoid "error while computing" task failures and the resulting back-off of your daily quotas, we suggest you set all your CMS machines to No New Tasks at least 12 hours beforehand to allow current tasks to time out in the normal way.  You can stop BOINC once all your tasks are finished, if you wish.<br />
Exactly how long the intervention will take is unclear, and there will be a delay of up to an hour to get a new batch of jobs queued afterwards.  I will post here when jobs are available again, hopefully before the end of the day European time.
]]></description>
        <pubDate>Tue, 26 Sep 2017 12:23:21 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Possible systems failures]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4444</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4444</guid>
        <description><![CDATA[
We seem to be in the early stages of a system failure for several sub-projects.  The <a href="http://wlcg-squid-monitor.cern.ch/snmpstats/mrtgall/CERN-PROD_lhchomeproxy.cern.ch_0/index.html" rel="nofollow">proxy server</a> has flatlined and my running jobs monitor is dipping alarmingly.  Please check if you are getting tasks flagged as computing failures, and set No New Tasks if so.<br />
<font color="blue">[Edit] On closer inspection, it may just be the CMS app. [/Edit]</font><br />
Obviously I'll apologise if this is a false alarm, but it's the wrong time of day to expect a prompt response from the CERN admins.
]]></description>
        <pubDate>Mon, 18 Sep 2017 22:00:29 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[New SixTrack exes]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4424</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4424</guid>
        <description><![CDATA[
Dear all,<br />
<br />
After testing them as the sixtracktest app, we have just pushed out executables to the sixtrack app. For the moment, we have executables only for the main OSs, i.e. Windows, Linux, and the brand new one for macOS. We are still finalising the definition of the plan classes with the sixtracktest app for targeting Android, FreeBSD, and ARM CPUs - e.g. see<br />
<a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4296&postid=32169#32169" rel="nofollow">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4296&postid=32169#32169</a><br />
<br />
Thanks a lot for your contribution and ... happy crunching!<br />
Alessio, for the sixtrack team
]]></description>
        <pubDate>Tue, 05 Sep 2017 16:40:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Deadline change for ATLAS jobs]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4398</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4398</guid>
        <description><![CDATA[
Due to the tight deadline of the ATLAS jobs, we have changed the deadline of ATLAS tasks from 2 weeks to 1 week. An ATLAS task takes about 3-4 CPU hours to finish on a moderate CPU (2.5 GFLOPS).
]]></description>
        <pubDate>Wed, 16 Aug 2017 08:42:03 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[New ATLAS app version released for Linux hosts]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4395</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4395</guid>
        <description><![CDATA[
We released a new version of the ATLAS app today, 2.41 for the x86_64-pc-linux-gnu platform. <br />
The new features of this version include:<br />
1. It requires the host OS to be either <b><font color="red">Scientific Linux 6 or CentOS</font> 7</b>. <br />
2. It requires <a href="http://cernvm.cern.ch/portal/filesystem" rel="nofollow">CVMFS</a> and <a href="http://singularity.lbl.gov/" rel="nofollow">Singularity</a> instead of VirtualBox to run the ATLAS jobs.<br />
3. It is more efficient, as it avoids the overhead of VirtualBox.<br />
Currently, this version is marked as beta. <br />
<br />
For those who want to try it out, we provide a script to install everything, including CVMFS and Singularity, <a href="http://atlasathome.cern.ch/boinc_conf/install_cvmfs_sin.sh" rel="nofollow">here</a>. <br />
<br />
<br />
Try it if you are interested!
]]></description>
        <pubDate>Thu, 10 Aug 2017 11:04:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS Weekend problem]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4393</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4393</guid>
        <description><![CDATA[
Warning: The WMAgent which controls CMS jobs appears to have suffered a failed component very recently, and the queue seems to be exhausted. Please set No New Tasks or change to a backup app while I try to raise someone at CERN to fix it. This could be a problem, given that this is expected to be the heaviest weekend of the year for holiday travel in Europe...
]]></description>
        <pubDate>Sat, 05 Aug 2017 04:31:20 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Optimising distribution of SixTrack tasks]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4383</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4383</guid>
        <description><![CDATA[
Dear Volunteers,<br />
<br />
we are trying to improve the distribution of <b>SixTrack</b> tasks. If your host could process more tasks but doesn't receive any during a project update, please let us know and send us your client logging report. Please continue the thread "SixTrack Tasks NOT being distributed" opened by Eric here:<br />
<a href="http://lhcathome.cern.ch/sixtrack/forum_thread.php?id=4324" rel="nofollow">http://lhcathome.cern.ch/sixtrack/forum_thread.php?id=4324</a><br />
so that we can collect all the issues in only one place. In this way, we could try to better tune parameters controlling the distribution of tasks on the server side.<br />
<br />
At the same time, we apologize for the loss of credits following the accidental deletion of lines in the main DB - please see message:<br />
<a href="http://lhcathome.cern.ch/sixtrack/forum_thread.php?id=4362&postid=31563#31563" rel="nofollow">http://lhcathome.cern.ch/sixtrack/forum_thread.php?id=4362&postid=31563#31563</a><br />
As you can see, task distribution has been progressing regularly since the beginning of the week.<br />
<br />
Thanks in advance for your precious cooperation,<br />
Alessio and Riccardo, for the SixTrack team
]]></description>
        <pubDate>Thu, 27 Jul 2017 12:44:17 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Aborted Work Units]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4375</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4375</guid>
        <description><![CDATA[
After deleting many really old results from 2013 until March 2017 (this was meant to be December 2016), it seems many tasks have been aborted. A full analysis and report will be posted. No action is required by volunteers.    Eric.
]]></description>
        <pubDate>Mon, 24 Jul 2017 12:26:54 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS Jobs working again]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4365</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4365</guid>
        <description><![CDATA[
It's been a few hours now since the Data Bridge appears to have been fixed, and jobs are staging out normally.  You can resume running CMS tasks at will.
]]></description>
        <pubDate>Tue, 18 Jul 2017 17:16:35 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS@Home -- please set No New Tasks and perhaps temporarily run another project]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4364</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4364</guid>
        <description><![CDATA[
There is a problem staging-out CMS@Home jobs to the Data Bridge.  Until we find the cause, please set your CMS crunchers to No New Tasks, or temporarily move them to another app or project.<br />
Sorry for the trouble, unfortunately it's beyond my capability to resolve.
]]></description>
        <pubDate>Sun, 16 Jul 2017 21:37:26 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[No RESULTS accepted from Linux Kernel 4.8.*]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4362</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4362</guid>
        <description><![CDATA[
As an emergency measure and over the weekend, I have set max_results_day to -1 for all hosts running Linux (Ubuntu?) kernel 4.8.*. SixTrack is consistently crashing with an IFORT run-time formatted I/O error. This will avoid wasting your valuable contributions.    Eric.
]]></description>
        <pubDate>Fri, 14 Jul 2017 13:50:37 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[IMPORTANT, pull back on SixTrack Inconclusive Results]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4341</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4341</guid>
        <description><![CDATA[
Please see Message 31102 on SixTrack Application, <br />
Inconclusive Results, keyword IMPORTANT.  Eric.
]]></description>
        <pubDate>Mon, 26 Jun 2017 18:08:03 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CMS application job queue is being run down.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4340</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4340</guid>
        <description><![CDATA[
We want to update the WMAgent job controller, so I've stopped the next batch (I hope).  We should run out of jobs in 10-12 hours, so set any machine running CMS tasks to No New Tasks as soon as practicable.  Should be up again tomorrow.
]]></description>
        <pubDate>Mon, 26 Jun 2017 15:59:30 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[SixTrack Inconclusive Results]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4337</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4337</guid>
        <description><![CDATA[
Please see the SixTrack Application threads for an important update,<br />
Message 31064, Keyword BANNED
]]></description>
        <pubDate>Mon, 26 Jun 2017 06:24:06 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[sixtrack_validator]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4335</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4335</guid>
        <description><![CDATA[
There will be a (very) short interruption while I<br />
install a new sixtrack_validator. Should fix null/empty<br />
fort.10 and the nasty "outlier" problem.<br />
See SixTrack Application, sixtrack_validator for more news and details.
]]></description>
        <pubDate>Sat, 24 Jun 2017 09:21:04 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[SixTrack Tasks distribution issues]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4325</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4325</guid>
        <description><![CDATA[
Please see Message boards:SixTrack application, thread<br />
"SixTrack Tasks NOT being distributed". This is to have one place for<br />
all relevant messages. This thread is for SixTrack only.<br />
My first post reports my personal status.
]]></description>
        <pubDate>Tue, 20 Jun 2017 12:43:01 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Network and server problems Sunday night]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4319</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4319</guid>
        <description><![CDATA[
We had a network problem in the computer centre at CERN last night, leading to a number of issues for our servers.  BOINC servers should be back in business now. <br />
<br />
Normally tasks should be correctly uploaded again on the next attempt. If you see any issues, please try an update or reset of the project.<br />
<br />
Sorry for the trouble, and happy crunching!
]]></description>
        <pubDate>Mon, 19 Jun 2017 07:19:47 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[SixTrack News - May 2017]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4288</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4288</guid>
        <description><![CDATA[
The SixTrack team would like to thank all the teams who took part in the 2017 pentathlon hosted by SETI.Germany:<br />
<a href="https://www.seti-germany.de/boinc_pentathlon/" rel="nofollow">https://www.seti-germany.de/boinc_pentathlon/</a><br />
where LHC@Home was chosen for the swimming discipline. The pentathlon gave us the opportunity to carry out a vast simulation campaign, generating lots of new results that we are now analysing. While the LHC experiments send volunteers tasks that analyse data collected by the LHC detectors or run Monte Carlo codes for data generation, SixTrack work units probe the dynamics of LHC beams; hence, your computers are running a live model of the LHC in order to explore its potential without actually using real LHC machine time, which is precious to physics.<br />
<br />
Your contribution to our analyses is essential. For instance, we reached <b>~2.5 MWUs</b> processed in total, with a peak slightly above <b>400 kWUs</b> processed at the same time, and <b>>50 TFLOPs</b>, during the entire two weeks of the pentathlon. The pentathlon was also the occasion to verify recent improvements to our software infrastructure. After this valuable experience, we are now concentrating our energies on <b>updating the executables</b> with brand-new functionality, extending the range of studies and of supported systems. This implies an even <b>greater dependence on your valuable support</b>.<br />
<br />
Thanks a lot to all the people involved! We count on your help and commitment to science and to LHC@home to pursue the new challenges of beam dynamics which lie ahead.
]]></description>
        <pubDate>Fri, 26 May 2017 14:43:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[LHCb application is in production]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4241</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4241</guid>
        <description><![CDATA[
We are very happy to announce that the LHCb application is out of beta and is now in production mode on LHC@home. Thank you all for your precious contribution. <br />
<br />
We are grateful to have you all as part of our project. <br />
<br />
Please, refer to the LHCb application forum for any problem or feedback.<br />
<br />
Thanks a lot<br />
Cinzia
]]></description>
        <pubDate>Thu, 27 Apr 2017 07:50:51 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[New file server]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4237</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4237</guid>
        <description><![CDATA[
We have added a new file server for download/upload to scale better with higher load. If there should be errors with download or upload of tasks, please report on the MBs. <br />
<br />
Thanks for contributing to LHC@home!
]]></description>
        <pubDate>Wed, 26 Apr 2017 14:00:48 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[ATLAS application now in production]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4172</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4172</guid>
        <description><![CDATA[
The ATLAS application is now in production here on LHC@home, after a period of testing.  This marks another milestone for the LHC@home consolidation, and we would like to warmly thank all of you who contributed help and testing for the migration!<br />
<br />
Please refer to <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4161" rel="nofollow">Yeti's checklist</a> for the ATLAS application and the <a href="https://lhcathome.cern.ch/lhcathome/forum_forum.php?id=93" rel="nofollow">ATLAS application forum</a> if you need help.
]]></description>
        <pubDate>Wed, 22 Mar 2017 15:45:58 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Network interruptions 15th of March]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4155</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4155</guid>
        <description><![CDATA[
Due to a network upgrade in the CERN computer centre, connections to LHC@home servers will intermittently time out tomorrow Wednesday morning between 4 and 7am UTC.  <br />
<br />
BOINC clients will retry later as usual, so this should be mostly transparent.
]]></description>
        <pubDate>Tue, 14 Mar 2017 09:58:26 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[VLHCathome project fully migrated]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4136</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4136</guid>
        <description><![CDATA[
The former vLHCathome project has now been migrated here and the old vLHCathome project site has been redirected.<br />
<br />
The credit has also been migrated as discussed <a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4134" rel="nofollow">in this thread.</a><br />
<br />
If your BOINC client complains about a wrong project URL, please re-attach to this project, LHC@home.<br />
<br />
Thanks again to all who contributed to vLHCathome and to those who contribute here!<br />
<br />
<i>-- The team</i>
]]></description>
        <pubDate>Thu, 02 Mar 2017 08:35:49 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Draining the CMS job queue]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4124</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4124</guid>
        <description><![CDATA[
Because of an upgrade to the WMAgent server, we need to drain the CMS job queue.  So, I'm not submitting any more batches at present and we should start running out over the weekend.  If you see that you are not getting any CMS jobs (not tasks...) please set No New Jobs or stop BOINC.<br />
I expect that the intervention will take place Monday morning, and hopefully we'll have new jobs again later that day.
]]></description>
        <pubDate>Fri, 17 Feb 2017 10:57:18 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Good news for the CMS@Home application]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4110</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4110</guid>
        <description><![CDATA[
This afternoon we demonstrated the final link in the chain of producing Monte Carlo data for CMS using this project (and the -dev project too, of course), namely the transfer of result files from the temporary Data Bridge storage to a CMS Tier 2 site's storage element (SE).  To summarise, the steps are:<br />
<br />
o Creating a configuration script defining the process(es) to be simulated<br />
o Submitting a batch of jobs of duration and result-file size suitable for running by volunteers<br />
o Having those jobs picked up by volunteers running BOINC and the CMS@Home application, and the result files returned to the Data Bridge<br />
o Running "merge" jobs on a small cluster at CERN to collect the smaller files into larger files (~2.2 GB) -- this step has to be done at CERN as most volunteers will not have the bandwidth (or data plan!) to handle the data volumes required.  This step also serves to a large extent as the verification step required to satisfy CMS of the result files' integrity.<br />
o Transferring the merged files into the Grid environment where they are then readily available to CMS researchers around the world<br />
<br />
Thanks, everybody.  From here on it gets more political, but we've been garnering support as the project progressed.  We now need to move into a more "production" environment and convince central powers-that-be to take over the responsibility of submitting suitable workflows and collecting the results.  You will still see some changes in the future, especially as we bring some of the more-advanced features across here from the -dev project.
]]></description>
        <pubDate>Fri, 27 Jan 2017 20:59:36 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[MacOS executable OSX 10.10.5 Yosemite]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4093</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4093</guid>
        <description><![CDATA[
Well, I have finally got some work on my Mac with our new MacOS executable<br />
built on OS X 10.10.5 Yosemite.<br />
Please report to me at eric.mcintosh@cern.ch, <br />
or to the MacOS executable thread under the SixTrack Application topic,<br />
if you get some work and there are problems. Eric.
]]></description>
        <pubDate>Thu, 19 Jan 2017 10:41:43 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[VM applications broken by the Windows 10 update KB3206632]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4077</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4077</guid>
        <description><![CDATA[
The Windows 10 update <a href="https://support.microsoft.com/en-us/help/4004227/windows-10-update-kb3206632" rel="nofollow">KB3206632</a> introduces an issue that affects virtualization-based security (VBS) and hence may break VM applications. The issue is fixed in the update <a href="https://support.microsoft.com/en-us/kb/3213522" rel="nofollow">KB3213522</a>. If you are running Windows 10, please ensure that you have applied the KB3213522 update. <br />
<br />
Thanks to everyone who contributed to the threads on this issue.<br />
<br />
Refs: <br />
<a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4052&postid=28400" rel="nofollow">Missing heartbeat file errors</a> <br />
<a href="https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4076" rel="nofollow">Microsoft KB3206632 from 16/12/15</a>
]]></description>
        <pubDate>Sun, 08 Jan 2017 20:51:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Season's Greetings]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4060</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4060</guid>
        <description><![CDATA[
A very Merry Christmas and a Happy New Year to all the LHC@home supporters.<br />
(I shall send some news about our plans for 2017 in the next few days.)<br />
Eric.
]]></description>
        <pubDate>Sun, 25 Dec 2016 08:33:19 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[VM applications]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4013</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4013</guid>
        <description><![CDATA[
Following the Theory simulations added 1 week ago, we have now also deployed the CMS and LHCb applications from the Virtual LHC@home project here on the consolidated, original <a href="https://lhcathome.cern.ch/lhcathome" rel="nofollow">LHC@home</a>.<br />
<br />
Please note that in order to run <b>VM applications</b> in addition to the <b>classic BOINC application Sixtrack</b>, you need to have a 64bit machine with <b>VirtualBox</b> installed and <b>virtualisation extensions</b> (VT-x) enabled. The details are explained on the <b><a href="http://lhcathome.web.cern.ch/join-us" rel="nofollow">join us</a></b> and <b><a href="http://lhcathome.web.cern.ch/faq" rel="nofollow">faq</a></b> pages on the LHC@home web site.<br />
<br />
By default, only the Sixtrack application is enabled in your BOINC project preferences. If you have VirtualBox installed and wish to try VM applications as well, you need to enable other applications in your <a href="https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project" rel="nofollow">LHC@home project preferences</a>.<br />
<br />
Please note that if you <b>run an older PC</b> with Windows XP or similar, it is <i><b>recommended to stay with the default: Sixtrack only</b></i>.<br />
<br />
Thanks for your contributions to LHC@home!<br />
<br />
<i>--The team</i>
]]></description>
        <pubDate>Mon, 21 Nov 2016 10:05:13 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[LHC@home consolidation]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4002</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4002</guid>
        <description><![CDATA[
As part of the consolidation of LHC@home, we have set up a new web front end using SSL for this project. The new URL is:<br />
<br />
<a href="https://lhcathome.cern.ch/lhcathome" rel="nofollow">https://lhcathome.cern.ch/lhcathome</a><br />
<br />
Please feel free to connect to the new site at your convenience. (BOINC clients 7.2 and later support SSL.)<br />
<br />
The old LHC@home classic site will continue operation as long as required. Currently there are no new Sixtrack tasks in the queue, but soon more applications and work will be available from this project.
]]></description>
        <pubDate>Thu, 06 Oct 2016 10:56:02 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[LHC@Home - SixTrack Project News]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3997</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3997</guid>
        <description><![CDATA[
The members of the <a href="http://lhcathome.web.cern.ch/projects/sixtrack" rel="nofollow">SixTrack project from LHC@Home</a> would like to thank all the volunteers who made their CPUs available to us! <i>Your contribution is precious</i>, as in our studies we need to scan a rather large parameter space in order to find the best working points for our machines, and this would be <i>hard to do without the computing power you all offer to us</i>!<br />
<br />
In 2012 we started performing measurements with beam dedicated to probing what we call the “<b>dynamic aperture</b>” (DA). This is the region in phase space where particles can move without experiencing a large increase in the amplitude of their motion. For large machines like the LHC this is an <b>essential parameter</b> for guaranteeing <b>beam stability</b> and allowing <b>long data taking</b> at the giant LHC detectors. The measurements will be benchmarked against numerical simulations, and this is where <i>you play an important role</i>! Currently we are finalising a first simulation campaign and we are in the process of writing up the results in a final document. As a next step we are going to analyse the second half of the measured data, for which a new tracking campaign will be needed. <i>…so, stay tuned</i>!<br />
<br />
<b>Magnets</b> are the main components of an accelerator, and <b>non-linearities</b> in their fields have a direct impact on the beam dynamics. The studies we are carrying out <i>with your help</i> are focussed not only on the current operation of the LHC but also on its upgrade, i.e. the High Luminosity LHC (<a href="http://hilumilhc.web.cern.ch/" rel="nofollow">HL-LHC</a>).  The design of the new components of the machine is in its final steps, and it is essential to make sure that the quality of the magnetic fields of the newly built components allows the highly demanding goals of the project to be reached. Two aspects are most relevant:<br />
<ul style="word-break:break-word;"><li>Specifications for the <b>field quality of the new magnets</b>. The criterion used to assess whether the magnets’ field quality is acceptable is based on the computation of the DA, which should be larger than a pre-defined lower bound. The various magnet classes are included in the simulations one by one, their impact on the DA is evaluated, and the expected field quality is varied until the DA acceptance criterion is met.</li></ul><br />
<ul style="word-break:break-word;"><li>Computation of the <b>dynamic aperture</b> under various optics conditions, analysis of the <b>non-linear correction system</b>, and <b>optics optimisation</b> are essential steps to determine the field quality goals for the magnet designers, as well as to evaluate and optimise the beam performance.</li></ul><br />
The studies involve accelerator physicists from both CERN and SLAC.<br />
<br />
Long story made short, the tracking simulations we perform require significant computer resources, and <i>BOINC is very helpful</i> in carrying out the studies. <i>Thanks a lot for your help!</i><br />
The SixTrack team<br />
<br />
Latest papers:<br />
<br />
R. de Maria, M. Giovannozzi, E. McIntosh (CERN), Y. Cai, Y. Nosochkov, M-H. Wang (SLAC), DYNAMIC APERTURE STUDIES FOR THE LHC HIGH LUMINOSITY LATTICE, <a href="http://inspirehep.net/record/1417371?ln=en" rel="nofollow">Presented at IPAC 2015</a>.<br />
Y. Nosochkov, Y. Cai, M-H. Wang (SLAC), S. Fartoukh, M. Giovannozzi, R. de Maria, E. McIntosh (CERN), SPECIFICATION OF FIELD QUALITY IN THE INTERACTION REGION MAGNETS OF THE HIGH LUMINOSITY LHC BASED ON DYNAMIC APERTURE, <a href="https://inspirehep.net/record/1314176?ln=en" rel="nofollow">Presented at IPAC 2014</a><br />
<br />
Latest talks:<br />
<br />
Y. Nosochkov, Dynamic Aperture and Field Quality, DOE review of LARP, FNAL, USA, July 2016<br />
Y. Nosochkov , Field Quality and Dynamic Aperture Optimization, LARP HiLumi LHC collaboration meeting, SLAC, USA, May 2016<br />
M. Giovannozzi, Field quality update and recent tracking results, HiLumi LHC LARP annual meeting, CERN, October 2015<br />
Y. Nosochkov, Dynamic Aperture for the Operational Scenario Before Collision, LARP HiLumi LHC collaboration meeting, FNAL, USA, May 2015
]]></description>
        <pubDate>Tue, 26 Jul 2016 08:37:55 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Disk Space Exceeded]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3985</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3985</guid>
        <description><![CDATA[
I am sorry we have submitted some "bad" WUs.<br />
They are using too much disk space. <br />
Please delete any WUs with names like <br />
wjt-18-L1-trc......<br />
wjt-15-L1-trc.......<br />
Apologies.
]]></description>
        <pubDate>Wed, 16 Mar 2016 06:15:12 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server daemons temporarily stopped]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3982</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3982</guid>
        <description><![CDATA[
Due to a problem with an underlying disk server, the BOINC daemons are temporarily shut down until the disk volume is back.
]]></description>
        <pubDate>Sat, 27 Feb 2016 12:36:01 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Short server interruption 9-Feb.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3980</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3980</guid>
        <description><![CDATA[
Our LHC@home servers will be down for a short while from 8 UTC on 9 February due to a disk server intervention. (Intervention postponed by 1 week.)
]]></description>
        <pubDate>Tue, 02 Feb 2016 08:27:30 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[BOINC Server up]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3974</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3974</guid>
        <description><![CDATA[
The server is back, for the moment at least.<br />
Clearing backlog of results.    Eric.
]]></description>
        <pubDate>Mon, 07 Dec 2015 07:55:32 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server down.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3973</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3973</guid>
        <description><![CDATA[
The BOINC server has been stopped temporarily because of<br />
file system problems at CERN. Hopefully to be restarted tomorrow<br />
Monday.      Eric.
]]></description>
        <pubDate>Sun, 06 Dec 2015 09:56:18 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Work/result buffering problem at CERN]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3970</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3970</guid>
        <description><![CDATA[
We have had a problem with the BOINC buffer on the CERN side over the weekend.<br />
It is being investigated and will hopefully soon be corrected.   Eric.
]]></description>
        <pubDate>Mon, 16 Nov 2015 09:49:17 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Another short service interruption]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3968</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3968</guid>
        <description><![CDATA[
The LHC@home servers will be down for a short while from 6:30 UTC Tuesday 10th November for a database update.
]]></description>
        <pubDate>Mon, 09 Nov 2015 07:51:18 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Service interruption tomorrow morning]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3964</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3964</guid>
        <description><![CDATA[
LHC@home servers will be down for about 1 hour tomorrow morning from 6am UTC, due to an intervention on the database server.
]]></description>
        <pubDate>Tue, 08 Sep 2015 09:07:23 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server interruption 12 UTC]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3961</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3961</guid>
        <description><![CDATA[
The BOINC server will be down for maintenance for about 30 minutes from 12:00 UTC today.<br />
<br />
BOINC clients will back off and return results later once the server is up as usual.<br />
<br />
Many thanks for your contributions to LHC@home!
]]></description>
        <pubDate>Mon, 24 Aug 2015 06:36:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Brief Interruption, Thursday 18th June, 2015]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3954</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3954</guid>
        <description><![CDATA[
There will be a hopefully brief interruption to the service tomorrow<br />
Thursday at 10:30 CST to provide separate NFS servers for SixTrack<br />
and Atlas. The WWW pages should still be accessible and a further<br />
message will be posted when the operation is complete. Eric and Nils.
]]></description>
        <pubDate>Wed, 17 Jun 2015 16:22:48 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Project down due to a server issue]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3952</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3952</guid>
        <description><![CDATA[
Due to a problem with an NFS server backend at CERN, the Sixtrack and ATLAS BOINC projects are down. A fix is underway.
]]></description>
        <pubDate>Thu, 11 Jun 2015 09:42:59 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[HostID 10137504 user aqvario]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3949</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3949</guid>
        <description><![CDATA[
HostID 10137504, owner aqvario.<br />
I set max_results_day to -1; locking the stable door<br />
after the horse has bolted. For some reason I can no longer find the<br />
messages I read this morning on this topic. Thanks for the<br />
help and the Google translation.   Eric.
]]></description>
        <pubDate>Sat, 06 Jun 2015 14:12:35 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Quorum of 5, wzero and Pentathlon]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3945</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3945</guid>
        <description><![CDATA[
I am currently running a set of very important tests to try to<br />
find the cause of a few numerical differences between different platforms<br />
and executables. I could not normally justify this, but because of your efforts<br />
during the Pentathlon I have a unique opportunity. It also keeps up the<br />
workload and gives you all an opportunity to earn credits. <br />
These tests are wzero with a quorum of 5. Thanks. Eric.
]]></description>
        <pubDate>Sun, 17 May 2015 14:32:09 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[DISK LIMIT EXCEEDED]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3944</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3944</guid>
        <description><![CDATA[
Please note that this may occur if you are also subscribed<br />
to the LHC experiment projects ATLAS or CMS via vLHCathome.<br />
A workaround is to delete the remaining files yourself.
]]></description>
        <pubDate>Sat, 16 May 2015 19:03:40 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[New news on the BOINC Pentathlon]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3943</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3943</guid>
        <description><![CDATA[
Please look at the news item "News 15th May, 2015" for the latest update <br />
involving the BOINC Pentathlon.   Eric.
]]></description>
        <pubDate>Fri, 15 May 2015 20:36:56 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[News 15th May, 2015]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3942</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3942</guid>
        <description><![CDATA[
As many of you know LHC@home has been selected to host<br />
the Sprint event of the BOINC Pentathlon organised by<br />
SETI.Germany. Information can be found at<br />
<a href="http://www.seti-germany.de/boinc_pentathlon/22_en_Welcome.html" rel="nofollow">http://www.seti-germany.de/boinc_pentathlon/22_en_Welcome.html</a><br />
The event starts at midnight and will last for three days.<br />
<br />
This is rather exciting for us and will be a real test of<br />
our BOINC server setup at CERN. Although this is the weekend<br />
following Ascension my colleagues are making a big effort to<br />
submit lots of work, and I am seeing a new record number of active WUs<br />
every time I look. The latest number was over 270,000 and the Sprint<br />
has not yet officially started.<br />
<br />
We have done our best to be ready without making any last minute changes<br />
and while this should be fun I must confess to being rather worried<br />
about our infrastructure. We shall see.<br />
<br />
We still have our problems, some of which have persisted for a year now.<br />
 <br />
I am having great difficulties building new executables since Windows XP<br />
was deprecated, and I am now trying to switch to gfortran on Cygwin.<br />
It seems appropriate to use a free compiler on our<br />
volunteer project.<br />
<br />
We are seeing too many null/empty result files. While an empty result can<br />
be valid if the initial conditions for tracking are invalid, I am hoping<br />
to treat these results as invalid. These errors are making it extremely<br />
difficult for me to track down the few real validated but wrong results.<br />
I have seen at least one case where a segment violation occurred, a clear<br />
error, but an empty result was returned. The problem does not seem to<br />
be OS or hardware or case dependent.<br />
<br />
I am also working on cleaning the database of ancient WUs. We had not<br />
properly deprecated old versions of executables until very recently.<br />
<br />
I am currently using boinctest/sixtracktest to try a SixTrack which will return the full results giving more functionality and also allowing a case to be automatically handled as a series of subcases.<br />
<br />
Then we must finally get back to MacOS executables, AVX support, etc.<br />
 <br />
Still an enormous amount of production is being carried out successfully<br />
thanks to your support. <br />
<br />
I shall say no more until we see how it goes for the next three days. Eric.
]]></description>
        <pubDate>Fri, 15 May 2015 20:34:20 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Short stoppage for a disk intervention]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3940</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3940</guid>
        <description><![CDATA[
The Sixtrack server will be down for a while this afternoon for a disk intervention. Clients will be able to upload results again soon.
]]></description>
        <pubDate>Thu, 30 Apr 2015 12:27:31 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Upgrade of the look and feel of the SixTrack website]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3938</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3938</guid>
        <description><![CDATA[
The <a href="http://lhcathomeclassic.cern.ch/sixtrack/" rel="nofollow">http://lhcathomeclassic.cern.ch/sixtrack/</a> website has been brought up to date with a new look and feel, consistent with the other LHC@Home projects. It maintains all the links and functionality of the previous one.
]]></description>
        <pubDate>Thu, 23 Apr 2015 12:20:31 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status Result Differences 29th March, 2015]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3928</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3928</guid>
        <description><![CDATA[
Please have a look at my latest post in:<br />
Number Crunching/Host messing up tons of results.  Eric.
]]></description>
        <pubDate>Sun, 29 Mar 2015 16:30:48 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server Intervention 10-Feb-2015]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3914</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3914</guid>
        <description><![CDATA[
There will be a short server interruption on Tuesday 10-Feb-2015 from 14:00-15:00 CET for a hardware upgrade. <br />
<br />
<br />
Update: The upgrade finished at 15:00 and the service is back up.
]]></description>
        <pubDate>Mon, 09 Feb 2015 10:17:46 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Uploads failing]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3911</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3911</guid>
        <description><![CDATA[
Apologies; disk full problem. Cleaning up and hoping to<br />
return to normal shortly. Thanks for all the messages. Eric.
]]></description>
        <pubDate>Thu, 29 Jan 2015 16:27:52 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[News, December, 2014.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3904</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3904</guid>
        <description><![CDATA[
Well not much news really. The project is ticking over<br />
and we have processed a tremendous amount of work in 2014.<br />
<br />
Right now we are trying to move the project to a new CERN IT<br />
infrastructure so there may be a few hiccups in January<br />
(CERN is closed for two weeks, but systems are up and running).<br />
<br />
We are still using executables from May, and I still don't have<br />
a valid MacOS executable :-( ; no heartbeat, so something is really<br />
wrong. I haven't found an explanation for the "no permission/cannot access"<br />
problems on Windows, but the overall error rate is about 1.5%, which<br />
seems to be "normal". We have also had problems with the w- WUs<br />
which produced a lot of output, now under control. However, running<br />
with a smaller number of pairs to reduce the volume of output seems<br />
to cause problems with validation. Working on this.<br />
<br />
A New Year, so I shall try and make a big effort to get moving forward<br />
as we have been pretty well stuck for 9 months; after ten years I am<br />
a bit disappointed at the lack of progress. However, as usual, we must<br />
maintain the service as top priority.<br />
<br />
I have also noted increased interest from the experiments in using volunteer<br />
computing and this may impact lhcathomeclassic......<br />
<br />
Anyway, LHC is heading steadily to restart in the Spring, and we shall<br />
continue studying the High Luminosity upgrade. Many thanks for your<br />
patience and understanding and continued valued support.<br />
<br />
A Very Happy New Year.   Eric.
]]></description>
        <pubDate>Wed, 31 Dec 2014 11:03:50 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Season's Greetings]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3903</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3903</guid>
        <description><![CDATA[
I wish you a very Merry Christmas and<br />
a Happ[y|ier] New Year. Thanks for all<br />
your support (news to follow). Eric.
]]></description>
        <pubDate>Wed, 24 Dec 2014 15:11:54 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Heavy I/O on Windows WUs]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3893</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3893</guid>
        <description><![CDATA[
It seems WUs with names beginning w-.... are creating a bit<br />
too much I/O for Windows. This is under investigation, but the results<br />
are good and are required. Thanks.  Eric.
]]></description>
        <pubDate>Fri, 31 Oct 2014 19:58:01 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[17:00 CET, 15th October, Service back to "normal".]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3886</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3886</guid>
        <description><![CDATA[
I believe we have finally resolved various issues as<br />
of about 16:00 today. Apologies for the downtime. Eric.
]]></description>
        <pubDate>Tue, 14 Oct 2014 15:47:00 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CERN AFS problems]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3882</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3882</guid>
        <description><![CDATA[
We seem to be having intermittent? problems with our local<br />
file system. Server running but.....will fix soonest.
]]></description>
        <pubDate>Fri, 10 Oct 2014 15:54:03 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Service back; 5th October]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3880</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3880</guid>
        <description><![CDATA[
I think we are back in business. Lots of work coming, I hope,<br />
once we sort out the disk space issue. Sorry for all the hassle<br />
and thank you for your continued support.
]]></description>
        <pubDate>Sun, 05 Oct 2014 13:01:11 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Re-enabled daemons]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3879</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3879</guid>
        <description><![CDATA[
I have painfully cancelled all w-b3 WUs.  According to doc they<br />
stay in the database but are marked as "not needed".<br />
I have also disabled further WUs of this type until we sort it out.<br />
Hope to have saved some 65,000 valid WUs. We shall see tomorrow.<br />
Please post to this thread if further problems (I have restarted as root...).<br />
It will probably take some time to get back to normal.<br />
Report will follow in due course.
]]></description>
        <pubDate>Sat, 04 Oct 2014 17:58:49 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Service disabled]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3878</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3878</guid>
        <description><![CDATA[
I have managed to stem the flood and disable the service.<br />
Apologies and will inform as soon as we are started again.
]]></description>
        <pubDate>Sat, 04 Oct 2014 08:42:20 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Disk Limit increased]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3874</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3874</guid>
        <description><![CDATA[
I am unable to stop submission.<br />
I have upped the limit on disk space to 500MB.<br />
I can't do anything about active WUs but I hope the new limit<br />
will suffice for new WUs. More news tomorrow.
]]></description>
        <pubDate>Sat, 04 Oct 2014 00:02:30 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Disk Limit exceeded w-b3]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3873</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3873</guid>
        <description><![CDATA[
Drastic action being taken to delete the download WUs.<br />
This may crash the server....<br />
Apologies for the wasted CPU.
]]></description>
        <pubDate>Fri, 03 Oct 2014 22:35:08 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Power Supply Ripple]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3861</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3861</guid>
        <description><![CDATA[
As requested, and for your information, Miriam has described her<br />
recent studies as follows:<br />
A principal component of the planned upgrade to a high luminosity LHC (HL-LHC) is the replacement of the high field quadrupole magnets - the so-called "inner triplet". <br />
The long term beam stability can be significantly reduced by magnetic field errors, misalignment of the magnets and by irregularities in the power supply (ripple). The recent batch of fifteen or so studies, involving over one and a half million cases or Work Units each of one million turns (for a stable beam), are aimed at determining the maximum allowable tolerances for the power supply ripple assuming the known field and alignment errors.<br />

]]></description>
        <pubDate>Tue, 22 Jul 2014 15:40:43 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[More on DOWNLOAD]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3860</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3860</guid>
        <description><![CDATA[
After running through the w- WUs I am now running<br />
a few test jobs as I think the WUs may have been OK.<br />
I cannot reproduce the problem (of course!) at CERN on my<br />
Windows 7 system.              Eric.
]]></description>
        <pubDate>Tue, 22 Jul 2014 15:38:21 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Download Errors located.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3859</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3859</guid>
        <description><![CDATA[
ERR_DOWNLOAD problem located and there should be no more once this<br />
batch of dud WUs has been cleared. May be Monday before<br />
I can do anything else.          Eric.
]]></description>
        <pubDate>Sat, 19 Jul 2014 08:32:33 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[DOWNLOAD ERRORS]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3858</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3858</guid>
        <description><![CDATA[
Just noticed the error rate has doubled to about 6% in the<br />
last 24 hours. Seems to be ERR_RESULT_DOWNLOAD which I<br />
have confirmed by checking the MBs right now. Any help/detailed<br />
info welcome while I notify CERN support. <br />
(Another Friday afternoon problem!)       Eric.
]]></description>
        <pubDate>Fri, 18 Jul 2014 14:24:19 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Three Problems, 22nd May.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3838</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3838</guid>
        <description><![CDATA[
Settling down a bit; I am seeing around 2% WU failures.<br />
<br />
Problem 1: EXIT_TIME_LIMIT_EXCEEDED. Tried to minimise this<br />
and will hopefully implement "outliers" to avoid it in future.<br />
<br />
Problem 2: Can't Create Process and I will look for help on this.<br />
Probably connected with our build but we shall see.<br />
<br />
Problem 3: Found 545 invalid results involving 124 hosts.<br />
One invalid result was duplicated! but I am not going to run<br />
everything 3 times. Can live with this. The top 12 culprits gave<br />
77 45 26 25 22 21 19 16 14 11 10 9 invalid results each.<br />
(I thought we stopped using hosts with this many errors......)<br />
Seems to be hardware, overclocking, cosmic rays?????<br />
<br />
Getting a lot of production done successfully.       Eric.<br />

]]></description>
        <pubDate>Thu, 22 May 2014 16:15:39 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 19th May, 2014]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3837</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3837</guid>
        <description><![CDATA[
Getting a lot of work done, but out of 400,000 WUs over the last seven days<br />
still have about 8000 errors (2% and decreasing I think). The main problem<br />
is EXIT_TIME_LIMIT_EXCEEDED but also "Can't create process". A side effect is<br />
a mess up with credits. I have increased the fpops bound to help, I hope, and<br />
today "reset credit statistics". Please be patient about credits and I shall see<br />
what happens and if we can compensate somehow.<br />
Unfortunately today I discovered a result difference, only one, but I need to<br />
do more checking. I see no invalid results so the former Linux/Windows<br />
discrepancy is largely resolved. My priority is the integrity of the results<br />
and I may have to spend some days pinning down the result difference,<br />
checking various ifort versions, and doing more checks and tests.<br />
We have a macOS executable under test.<br />
Thank you for your patience, understanding and support.   Eric.<br />
(P.S. Getting correct identical results on any PC from a Pentium 3<br />
to the latest, with a multitude of versions of Linux, Windows and macOS<br />
is not easy! I can publish only when the LHC@home service is > 99%.<br />
Afterwards GPU, Android, and 10 million turns)
]]></description>
        <pubDate>Mon, 19 May 2014 20:07:05 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[CreateProcess problems]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3835</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3835</guid>
        <description><![CDATA[
I am seeing about 1% CreateProcess problems mainly on Windows 7.<br />
Most often Access Denied (in various languages :-).<br />
Also some Access violation, page out of date or similar.<br />
Found some BOINC mails about this. Under investigation.<br />
Seems to be host dependent.<br />
(More work coming soon.)                Eric.
]]></description>
        <pubDate>Fri, 16 May 2014 10:20:54 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[LHC@home is back]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3833</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3833</guid>
        <description><![CDATA[
The service was restarted today and WUs should start<br />
coming in, building up gradually. Thanks to all. Eric.
]]></description>
        <pubDate>Wed, 14 May 2014 14:56:03 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[First production tests, 11th May, 2014]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3831</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3831</guid>
        <description><![CDATA[
Trying 590 WUs tonight. If all OK will restart full<br />
production tomorrow 12th May.    Eric.
]]></description>
        <pubDate>Sun, 11 May 2014 19:10:14 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 10th May]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3830</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3830</guid>
        <description><![CDATA[
Please see MBs, Number Crunching, Status 10th May, Version 451.07
]]></description>
        <pubDate>Sat, 10 May 2014 09:26:51 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[WU Submission  SUSPENDED  19th April, 2014]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3827</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3827</guid>
        <description><![CDATA[
In order to avoid any further errors and waste of your valuable<br />
resources I have temporarily stopped WU submission. There are only<br />
a few thousand WUs active and when they are cleared I hope we will have<br />
new Windows executables. Sadly the Windows executables are now giving<br />
wrong results in many cases. I looked at using Homogeneous Redundancy<br />
but I would still get wrong results. I thought of removing the Windows<br />
executables but they are over 80% of our capacity. In this way I hope in<br />
a few days after users and support return from vacation we can safely<br />
introduce new Windows executables after tests using the BOINC test<br />
facility. Sorry about that but I would rather get it fixed properly as we<br />
have lots of new work coming.<br />
<br />
Thank you for your patience and support.       Eric.
]]></description>
        <pubDate>Sat, 19 Apr 2014 11:05:24 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, March 2014]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3818</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3818</guid>
        <description><![CDATA[
First, in reply to a recent query about 2014 workload, thanks to Massimo:<br />
"The majority of the 2014 studies will be devoted to LHC upgrade and the rest to understand the nominal<br />
machine. I do not expect any increase in workload when approaching the LHC re-start in 2015, on the<br />
other hand, we will all be locked up in the control room and the resources for performing the<br />
simulations will be reduced."<br />
<br />
Second, we have been experiencing major problems with our<br />
Windows executables for several months now.<br />
There are "small" result differences between Windows and Linux.<br />
After extensive testing I believe they are due to the Windows<br />
ifort compiler. This will be verified and fixed as soon as I<br />
return to CERN next week. In addition new builds of SixTrack<br />
for Windows, which now include a call boinc_unzip, are failing<br />
on Windows in at least two ways; there is a problem parsing the<br />
hardware description (/proc/cpuinfo on Linux) and secondly we<br />
get "cannot Create Process" errors. So, we shall first try and<br />
build without the hopefully responsible call, and fix the result<br />
differences. We can then resume development of the case splitting<br />
to smaller WUs and the return of all results.<br />
<br />
It is great that your support continues and, when required, we have<br />
lots of capacity. Saw a new record of over 140,000 WUs in<br />
process a couple of weeks ago.     Eric.<br />

]]></description>
        <pubDate>Sun, 16 Mar 2014 08:43:45 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 24th January, 2014]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3803</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3803</guid>
        <description><![CDATA[
Hope this will answer some of your messages.<br />
<br />
We still have some 34,000 WUs NOT being taken. We have apparently<br />
almost 6000 in progress.<br />
<br />
We introduced SixTrack Version 4.5.03 on Wednesday 22nd<br />
January after extensive testing on boinctest and at CERN.<br />
Unluckily Yuri flooded us with work at the same time<br />
and AFS blew up leading to a huge backlog of over 16,000<br />
results to be downloaded.<br />
<br />
1. Results Validation; seems to be OK. I summarise that,<br />
counting from 0-59 we do NOT CHECK Words 51, 59? and 60<br />
in fort.10.<br />
<br />
The validator log shows many many "cannot open" supposedly<br />
existing results for comparison. They were probably lost<br />
somehow.<br />
<br />
2. Assimilation; the log shows<br />
"Herror too many total results"  !!!<br />
There are about 2000 (1979) unique messages and cases/WUs.<br />
I suspect we may need to clean the database and remove results<br />
(with clients losing credit I am afraid, but they will probably never<br />
get credit for these anyway).<br />
I could delete them from upload but that would probably be worse.<br />
<br />
3. Scheduler log: there are about 2.4 million messages of which<br />
there are 1.64M unrecognised messages, multiple messages per WU.<br />
This is perhaps significant!<br />
previously these messages existed only for Macs as far as I can see.<br />
here is one case:<br />
2014-01-22 17:24:41.1073 [PID=51877]   HOST::parse(): unrecognized: opencl_cpu_prop<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: platform_vendor<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: Advanced Micro Devices, Inc.<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: /platform_vendor<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: opencl_cpu_info<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: name<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz<br />
2014-01-22 17:24:41.1075 [PID=51877]   HOST::parse(): unrecognized: /name<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: vendor<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: GenuineIntel<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: /vendor<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: vendor_id<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: 4098<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: /vendor_id<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: available<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: 1<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: /available<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: half_fp_config<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: 0<br />
2014-01-22 17:24:41.1076 [PID=51877]   HOST::parse(): unrecognized: /half_fp_config<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: single_fp_config<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: 191<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: /single_fp_config<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: double_fp_config<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: 63<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: /double_fp_config<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: endian_little<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: 1<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: /endian_little<br />
2014-01-22 17:24:41.1077 [PID=51877]   HOST::parse(): unrecognized: execution_capabilities<br />
2014-01-22 17:24:41.1078 [PID=51877]   HOST::parse(): unrecognized: 3<br />
2014-01-22 17:24:41.1078 [PID=51877]   HOST::parse(): unrecognized: /execution_capabilities<br />
2014-01-22 17:24:41.1078 [PID=51877]   HOST::parse(): unrecognized: extensions<br />
2014-01-22 17:24:41.1078 [PID=51877]   HOST::parse(): unrecognized: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_kh<br />
2014-01-22 17:24:41.1078 [PID=51877]   HOST::parse(): unrecognized: /extensions<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: global_mem_size<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: 17029206016<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: /global_mem_size<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: local_mem_size<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: 32768<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: /local_mem_size<br />
2014-01-22 17:24:41.1153 [PID=51877]   HOST::parse(): unrecognized: max_clock_frequency<br />
2014-01-22 17:24:41.1154 [PID=51877]   HOST::parse(): unrecognized: 3500<br />
2014-01-22 17:24:41.1154 [PID=51877]   HOST::parse(): unrecognized: /max_clock_frequency<br />
2014-01-22 17:24:41.1154 [PID=51877]   HOST::parse(): unrecognized: max_compute_units<br />
2014-01-22 17:24:41.1154 [PID=51877]   HOST::parse(): unrecognized: 8<br />
2014-01-22 17:24:41.1154 [PID=51877]   HOST::parse(): unrecognized: /max_compute_units<br />
2014-01-22 17:24:41.1154 [PID=51877]   HOST::parse(): unrecognized: opencl_platform_version<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: OpenCL 1.2 AMD-APP (1348.5)<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: /opencl_platform_version<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: opencl_device_version<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: OpenCL 1.2 AMD-APP (1348.5)<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: /opencl_device_version<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: opencl_driver_version<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: 1348.5 (sse2,avx)<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: /opencl_driver_version<br />
2014-01-22 17:24:41.1155 [PID=51877]   HOST::parse(): unrecognized: /opencl_cpu_info<br />
2014-01-22 17:24:41.1156 [PID=51877]   HOST::parse(): unrecognized: /opencl_cpu_prop<br />
2014-01-22 17:24:41.3583 [PID=51877]   Request: [USER#221474] [HOST#10137513] [IP 69.35.195.242] client 7.2.33<br />
2014-01-22 17:24:41.3880 [PID=51877]    Sending reply to [HOST#10137513]: 0 results, delay req 6.00<br />
2014-01-22 17:24:41.3880 [PID=51877]    Scheduler ran 0.035 seconds<br />
<br />
I am not an expert but it seems to me it might explain work not being taken.......<br />
(but never saw this with boinctest!).<br />
<br />
Other issue; one client reports "Cannot Create Process" on Windows 7.<br />
May or may not be significant.<br />
<br />
Are executables "signed" OK?<br />
<br />
So all a bit complicated but hope to sort it (very) soon.<br />
    Eric.
]]></description>
        <pubDate>Fri, 24 Jan 2014 12:34:09 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Hiccup, today 23rd January]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3802</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3802</guid>
        <description><![CDATA[
Apologies for an interruption to service.<br />
Working on it. More news when corrected.<br />
Eric.
]]></description>
        <pubDate>Thu, 23 Jan 2014 08:45:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Publications Update]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3790</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3790</guid>
        <description><![CDATA[
The WWW page<br />
   http://lhcathome.web.cern.ch/sixtrack/sixtrack-and-numerical-simulations<br />
has been updated by Massimo with recent publications concerning LHC@home.
]]></description>
        <pubDate>Tue, 19 Nov 2013 15:52:20 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[News Status and Plans 19th November, 2013]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3789</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3789</guid>
        <description><![CDATA[
Please see the MB Number Crunching for an update. Eric.
]]></description>
        <pubDate>Tue, 19 Nov 2013 08:09:05 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Problem October 23rd Fixed]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3783</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3783</guid>
        <description><![CDATA[
The permissions on the directory for the logs were wrong.<br />
Corrected and results being uploaded. A fuller report and<br />
a new Status and Plans will be issued soonest. 
]]></description>
        <pubDate>Thu, 24 Oct 2013 08:16:11 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Problems 23rd October, 2013]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3782</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3782</guid>
        <description><![CDATA[
Sorry for the upload problems. Hope somebody here will<br />
fix this soon. (I thought we had a new record number<br />
of WUs in progress! :-)    Eric.
]]></description>
        <pubDate>Wed, 23 Oct 2013 17:24:26 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 13th September, 2013]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3771</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3771</guid>
        <description><![CDATA[
Still fighting to produce a good set of Linux executables.<br />
Lots of work for Windows systems!<br />
Created some notes on Numerical reproducibility<br />
[url=http://cern.ch/mcintosh]CV and Notes on Floating-Point[/url].
]]></description>
        <pubDate>Fri, 13 Sep 2013 06:16:34 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status 6th September]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3769</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3769</guid>
        <description><![CDATA[
New thread as feedback is in several others.<br />
I have resolved server out of space for the short term and<br />
we will implement a proper fix soonest.<br />
<br />
Issue remains with Linux executables I think. I have checked and<br />
informed my colleagues. The ".exe" suffix is confusing but the pni<br />
executables look OK (crash on my test machine without pni of<br />
course, but OK on my modern one). We do not have a Mac executable<br />
yet. <br />
<br />
Now that things have settled down we will pursue an analysis of the problem(s).<br />
I do not want to go back because we urgently need the new physics in<br />
this version.<br />
<br />
Thanks for your patience and understanding. Getting lots of results<br />
anyway.                 Eric.<br />
<br />

]]></description>
        <pubDate>Fri, 06 Sep 2013 12:18:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[New SixTrack]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3766</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3766</guid>
        <description><![CDATA[
SixTrack CERN Version 4463 is now in production.
]]></description>
        <pubDate>Wed, 04 Sep 2013 07:39:01 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Testing]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3764</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3764</guid>
        <description><![CDATA[
Just running "last" tests. Hope to have new SixTrack tomorrow.
]]></description>
        <pubDate>Mon, 02 Sep 2013 19:03:12 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Short Failing Work Units]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3763</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3763</guid>
        <description><![CDATA[
We are trying to use the test option of the BOINC SixTrack project.<br />
The very short WUs are failing. We have a fix and shall try again<br />
soon. More production to follow. Thanks for your patience.<br />
Eric.
]]></description>
        <pubDate>Sun, 01 Sep 2013 06:00:46 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status and Plans, 30th August, 2013]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3759</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3759</guid>
        <description><![CDATA[
Please see Message Boards: Number Crunching: Status and Plans 20th August, 2013<br />
(Sorry about date!).  Eric.
]]></description>
        <pubDate>Fri, 30 Aug 2013 12:56:30 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[May, 2013 update.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3744</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3744</guid>
        <description><![CDATA[
Server down (temporarily I hope).  Trying to fix the "unzip" problem.<br />
See my recent posts to Number Crunching: Status and Plans May 25th,<br />
and Results Discrepancies for more info.     Eric.
]]></description>
        <pubDate>Sat, 25 May 2013 11:04:18 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[More work coming now.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3737</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3737</guid>
        <description><![CDATA[
We have introduced a new SixTrack Version 4446 and I am resuming<br />
production on an intensity scan as well as running more tests; usual<br />
mixture of short/long run times. We are also trying to return more<br />
results files to help identify problems.   Thanks for your help as usual.<br />
Eric.
]]></description>
        <pubDate>Wed, 08 May 2013 17:56:27 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Dynamic Aperture Tune Scan]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3731</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3731</guid>
        <description><![CDATA[
Hello everybody, <br />
after a few technical problems in the last few days, we are now ready to submit a first Tune Scan for the Dynamic Aperture study we are performing at CERN.<br />
These simulations will give us a first hint of how the High Luminosity upgrade for the LHC will work, and in particular the effect of the Beam-Beam interaction will be analysed.<br />
This will be only the first bunch of simulations, because various scenarios are possible for this upgrade, and we need to investigate each one of them deeply to decide which one best fits our requirements...so keep your machines <font color="red">ready to crunch!!</font>
]]></description>
        <pubDate>Fri, 15 Mar 2013 09:35:53 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Interruption for server update]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3729</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3729</guid>
        <description><![CDATA[
There will be a short server interruption today for a software update. New jobs should come later once we have checked the software chain. <br />
<br />
The update is now done. Thanks for your contributions and have a nice day!
]]></description>
        <pubDate>Sun, 10 Mar 2013 09:07:24 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Forum restrictions]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3724</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3724</guid>
        <description><![CDATA[
Due to spam activity, all forums apart from Questions & Answers: Getting Started now require some BOINC credit to allow posting. If you are a complete newcomer, please check the existing Questions & Answers first.<br />
<br />
The team.
]]></description>
        <pubDate>Fri, 15 Feb 2013 12:58:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Pause]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3687</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3687</guid>
        <description><![CDATA[
There will be a pause for a week or two.<br />
See the "More work" thread in the News forum for more info.
]]></description>
        <pubDate>Fri, 08 Feb 2013 15:10:03 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[More work]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3661</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3661</guid>
        <description><![CDATA[
Can't keep up but more work coming now.
]]></description>
        <pubDate>Sun, 03 Feb 2013 04:34:00 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Production 2013]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3652</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3652</guid>
        <description><![CDATA[
Great; as you will have seen, we are running flat out on intensity scans, one million turns max.<br />
Over 100,000 tasks running! The CERN-side infrastructure is creaking at the seams.<br />
We will wind down in a week or two to introduce a new SixTrack version (with suitable<br />
warning). 
]]></description>
        <pubDate>Thu, 31 Jan 2013 11:07:55 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[First tests 2013]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3571</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3571</guid>
        <description><![CDATA[
Trying to run a few thousand cases from Scientific Linux 6 (SLC6)<br />
here at CERN. Eric.
]]></description>
        <pubDate>Fri, 11 Jan 2013 12:09:25 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[A Happy New Year]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3564</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3564</guid>
        <description><![CDATA[
Thanks for all the support in 2012 (and before). There has been a further delay due to a power cut,<br />
a broken PC, and the CERN annual closure for two weeks. Once again, more detailed<br />
information when I have recovered. So a Happy New Year, and I am hoping for<br />
an even better 2013.
]]></description>
        <pubDate>Sun, 06 Jan 2013 11:39:19 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Problems/Status 28th November, 2012 and PAUSE]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3557</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3557</guid>
        <description><![CDATA[
Discovered some problems with result replication, and ran out of<br />
disk space at CERN. There will be a pause, for a few days at least,<br />
while I investigate and resolve. (Will post details soonest to the<br />
MB Number Crunching.) Eric
]]></description>
        <pubDate>Wed, 28 Nov 2012 17:17:55 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, Thursday 15th November]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3553</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3553</guid>
        <description><![CDATA[
Hiccup; mea culpa. On vacation and travelling since Tuesday<br />
and ran out of disk space in BOINC buffer at CERN :-(<br />
I think all is OK again now after corrective actions and more work<br />
is on the way. Sorry about that.        Eric.
]]></description>
        <pubDate>Thu, 15 Nov 2012 07:44:40 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status and Plans, Sunday 4th November]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3545</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3545</guid>
        <description><![CDATA[
First service continues to run well; the first intensity scan is nearing completion with well over a million results in 15 studies successfully returned. Just a couple of hundred thousand more!<br />
(Sadly no one study is complete, but a couple are very close and I shall start post-processing and analysis soon. I am still reflecting on the thread "Number crunching; WU not being sent to another user".<br />
This is not easy: trying to get studies complete while keeping the system busy. I am the "feeder", and since in the end I need all the studies, I am rather prioritising keeping WUs available.) <br />
<br />
Just checked and we have over 80,000, yes eighty thousand WUs active and this is a new (recent) record.<br />
<br />
Draft documentation of the User side is now available thanks to my colleague R. Demaria. If you are interested<br />
 [url=http://sixtrack-ng.web.cern.ch/sixtrack-ng/]SixDesk Doc[/url]<br />
and I hope you can access it (otherwise I shall put a copy to LHC@home).<br />
<br />
Right now I hope to try new executables with new physics on our test server, and I might shortly appeal for some volunteers to help (and also to run a few more 10 million turn jobs). I do NOT want to risk the production service while it is running so smoothly.<br />
<br />
Otherwise (At Last!) I shall start writing my paper on how to get identical results on ANY IEEE 754 hardware with ANY standard compiler<br />
at ANY level of Optimisation. Thanks to all.        Eric.<br />

]]></description>
        <pubDate>Sun, 04 Nov 2012 15:08:44 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status and Plans, Saturday 29th September, 2012]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3537</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3537</guid>
        <description><![CDATA[
All running very smoothly indeed. Just a problem with deadline scheduling which I hope we can discuss and resolve on Monday, especially with some feedback from the BOINC meeting in London.<br />
Also some hiccups on the CERN AFS infrastructure.<br />
I am now hoping to prioritise the writing of my paper on numeric results reproducibility but I am continuing to run work for the next weeks as described in my new thread "Work Unit Description"<br />
in the Message Board "Number Crunching".<br />
I am also pondering how to best handle "very long"<br />
jobs bearing in mind your feedback.<br />
And of course I shall try and keep you informed.<br />
<br />
Thank you for your continued support.  Eric.<br />
<br />

]]></description>
        <pubDate>Sat, 29 Sep 2012 11:20:02 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, Sunday 9th September, 2012.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3526</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3526</guid>
        <description><![CDATA[
All running well still. One user, though, reports "Maximum Elapsed Time Exceeded"<br />
on several, perhaps all, of his WUs.<br />
Still checking for MacOS results, but no<br />
further complaints at the moment.<br />
<br />
I present some basic info.<br />
<br />
There have been several changes to URLs and Servers outwith my control. The correct site is http://lhcathomeclassic.cern.ch/sixtrack/<br />
This can indeed be found easily from LHC@home and then the SixTrack project (rather than Test4Theory). The current server is boinc05@cern.ch.<br />
<br />
I define "normal" WUs as 10**5/100,000 turns but remember all particles may be lost after an arbitrary number of turns, sometimes, even just a few turns at large amplitudes.<br />
Long WUs are 10**6 or one million turns and very Long WUs<br />
10**7 or 10 million turns, and who knows maybe one day 10**8 turns. <br />
That depends on how the floating-point error accumulates and at which point the loss/increase of energy and loss of symplecticity invalidate the results. It will be exciting to find out.<br />
<br />
For Functionality, Reliability and Performance.<br />
While waiting for the LXTRACK user node and the second server for test and backup (I assume they will finally get approved!):<br />
<br />
Functionality; adequate for the moment. It would be good to have a priority system with three levels: <br />
1. Run first, after other Level 1 tasks.<br />
2. Normal; queue after Level 1 and before Level 3.<br />
3. Run only if no Level 1/2 tasks are queued.<br />
<br />
I am thinking in terms of running 10**7 jobs as a series of 10**6 jobs. This requires returning and submitting more data, the fort.6 output and the checkpoint/restart files as a minimum. This would be very good additional functionality in itself.<br />
<br />
Reliability; pretty good but needs the backup server, LXTRACK, and less reliance on CERN AFS.<br />
Should provide a quick test (1 or 2 minutes) to verify the node produces correct results without running the whole WU. This would not obviate result validation but would avoid wasting resources.<br />
I could also provide a longer test on the WWW with canonical results that any volunteer could run if he suspects he has over-clocked or is getting results rejected.<br />
<br />
Performance; pretty good now with SSE2, SSSE3, PNI or whatever.<br />
Should implement GPU option. Should measure the cost of the numeric portability.  <br />
(Incidentally Intel are hosting a Webinar on this topic on Wednesday, but I guess it will address only Intel H/W.)<br />
<br />

]]></description>
        <pubDate>Sun, 09 Sep 2012 15:57:05 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 2nd September, 2012]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3518</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3518</guid>
        <description><![CDATA[
Well all seems to be running rather well as seen from the<br />
CERN side. So I present the topics for review on Tuesday.<br />
1. IT report on LXTRACK proposal (to greatly improve facilities for the<br />
physicists including more disk space and much improved reliability).<br />
2. Proposal for a second "test" server (to test very long jobs, and to try returning<br />
the full results, without affecting the current service).<br />
3. Project Status and open issues from the MBs:<br />
  a) More buffered work (user request).<br />
  b) Access to boinc01! Apparently some attempts to contact this obsolete service.<br />
Could be stale WWW pointers or the like.<br />
  c) HTTP problems, one user? (I need to send byte count and MD5 checksum.)<br />
  d) MacOS executable. Open issue; works for some people.<br />
  e) Deadline scheduling. It seems that work is deleted because volunteers fear their<br />
contribution will be wasted. But is this true? I have 99.999% of results OK, but how many<br />
WUs were not credited?<br />
  f) GPU enabled SixTrack <br />
4. A.O.B. including Date and time for a small party and the invitation list<br />
to celebrate recent progress and the many helpful comments and suggestions.<br />

]]></description>
        <pubDate>Sun, 02 Sep 2012 15:34:14 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 26th August, 2012]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3513</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3513</guid>
        <description><![CDATA[
MacOS executable is working, for some at least.<br />
I have queued 500,000 jobs, intensity scan,<br />
while I clear the decks. Many thanks for all the<br />
suggestions and comments on (very) long jobs.
]]></description>
        <pubDate>Sun, 26 Aug 2012 13:47:18 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Very long jobs]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3511</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3511</guid>
        <description><![CDATA[
I am now going to submit just a few hundred very<br />
long 10**7 turn jobs to complete two studies.<br />
I think this will be OK now; we shall see.
]]></description>
        <pubDate>Wed, 22 Aug 2012 15:48:40 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Credits]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3509</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3509</guid>
        <description><![CDATA[
Please see the Message Board Number Crunching, Thread Credits for some<br />
hopefully good news from Igor.
]]></description>
        <pubDate>Mon, 20 Aug 2012 19:15:57 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 19th August, 2012.]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3507</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3507</guid>
        <description><![CDATA[
All is running rather well; over 100,000 tasks queued, and over 56,000 running. I have a bit more work prepared, but badly need to do some analysis. After some flak, we have been receiving many messages of support and also a lot of help in identifying the problem with the MAC executable.<br />
<br />
Igor has identified and corrected the problem with Credits and is still cleaning up and trying to repair. <br />
(This was my fault; trying to run 10**7 turn jobs taking 80 hours.<br />
However I can report that 99% of them have completed successfully,<br />
and others are still active.)<br />
<br />
The Mac executable issue may even be solved, but we still need to watch over the next few days.<br />
<br />
There may be a problem with Deadlines....we shall see.<br />
<br />
I am waiting for PC support to install my NVIDIA TESLA, memory and upgraded power supply, and Linux. I am ready to install the software next and try Tomography. There is some interest in ABP especially for existing MPI applications. We shall see.<br />
<br />
I have STILL NOT finished the SixDesk doc or prepared the tutorial.<br />
<br />
I take this opportunity to outline the LXTRACK system: I hope IT support could fill in the details and do it.<br />
<br />
The justification is that AFS limitations and problems have made life very difficult. <br />
I have used my desk-side pcslux99 (thanks to Frank who donated it) as a prototype to run several hundred thousand jobs over the last few weeks. <br />
Sadly I do not have the LSF commands like bjobs and bsub, as it is an old 32-bit machine, and I am NOT wanting to become a sysadmin again. It has almost 200GB of disk space, of which I am using only 12% but increasing. Under this setup I have virtually no problems and do everything with the SixDesk scripts called from master scripts in acrontab entries.<br />
<br />
LXTRACK should be a "standard" lxplus Virtual machine i.e. with LSF and CASTOR and SVN and AFS etc etc. BUT with at least a Terabyte of disk space NON AFS, /data, say. Only users in the AFS PTS Group boinc_users should be allowed to login. <br />
(We could even create the /data/$LOGNAME directory for them.) How can we manage this space?  Given the small number of cooperative users a script to monitor is probably adequate. <br />
Processes should NOT be killed for exceeding CPU or real time limits. <br />
Later, ideally, we could possibly create non_AFS buffers for communication with BOINC.<br />

]]></description>
        <pubDate>Sun, 19 Aug 2012 14:25:12 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[MacOS Executable]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3506</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3506</guid>
        <description><![CDATA[
(Re-)activated MacOS executable built on MacBook PRO.<br />
Will be watching closely for errors.  Eric and Igor.
]]></description>
        <pubDate>Thu, 16 Aug 2012 15:13:42 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status 12th August]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3504</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3504</guid>
        <description><![CDATA[
All is running rather well from the CERN side and I have initiated an intensity scan to run while I work a bit on the GPU. I have a real-time deadline and I<br />
must try this over the next two weeks. In spite of a couple of issues<br />
with the CERN infrastructure I have still managed to queue over 90,000 Work Units as part of an intensity scan (different bunch sizes and charges).<br />
<br />
We are getting flak about credits or points. One obscene message I tried to hide, but the user said he got only<br />
200 points for 80 hours when he expected at least 1000, and another user 62.70 points for 110 hours. So we lost a couple of volunteers, but we are also getting support with over 40,000 active Work Units.<br />
<br />
There is also an issue with the real time deadline for my 10 million turn jobs.<br />
<br />
I hope to fix the MAC executable next week with my colleague.<br />

]]></description>
        <pubDate>Sun, 12 Aug 2012 11:51:47 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status, 12th August]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3503</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3503</guid>
        <description><![CDATA[
Please see the NEWS Message Board.
]]></description>
        <pubDate>Sun, 12 Aug 2012 11:43:55 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Status/Plans, 7th August 2012]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3497</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3497</guid>
        <description><![CDATA[
First, many thanks for your continued support. From my/CERN side all has been running rather well and I am submerged by results.<br />
I now need to take some time to analyse them. In particular, to decide between the two methods of computing the beam-beam effect.<br />
Then I shall probably submit several studies to do an intensity scan, where I study the beam-beam effect depending on the size, and hence charge, of the accelerated bunch of particles.<br />
<br />
At the same time, I must finish the documentation of the "user"<br />
infrastructure so that my colleagues may easily use BOINC as they return from vacation. In addition I want to set up a dedicated "user" system "lxtrack" in order to provide disk space here and to try and keep up with the results as they are returned.<br />
<br />
I have to look at the Deadline problem for 10**7 turn jobs.<br />
I set a bound of 30 days for any WU....need to discuss with Igor if that is NOT what you see at home.  Of course we really want a low bound to get results back quickly, but I also want to use older, slower systems. We shall have to work out some sort of compromise. My attempt at 10**7 turns was probably a bit over the top, but I was keen to try it.<br />
<br />
We hope/expect to produce a valid MAC executable this week. I also need to add some new "physics", new elements, to Sixtrack as provided by a colleague. (Also need to add modifications for "Collimation" but they are not relevant to BOINC.)<br />
The next version should also support SSE4.1.<br />
<br />
I was very pleasantly surprised to win an NVIDIA TESLA C2075.<br />
The catch is that I have to use it and program it with OpenACC. There will doubtless be some hiccups installing the board and the necessary<br />
(PGI) software. I shall in fact try my "Tomography" application, which already runs in parallel using HPF or OpenMP. If that works I shall seriously consider a multi-threaded SixTrack (using GPUs or not) by tracking many more particles in each Work Unit. Non-trivial but rather exciting. I am just at the ideas stage here, but.....it would of course use multiple threads on a multi-core PC as well. A dream?<br />
<br />
Finally, I have to take time to publish my work on floating-point portability and reproducibility. I believe I might be the only person who gets identical, bit-for-bit (0 ULP difference) results after many Gigaflops with 5 different Fortran compilers at different levels of optimisation.<br />
<br />
<br />
<br />

]]></description>
        <pubDate>Tue, 07 Aug 2012 15:18:31 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[MAC executable]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3486</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3486</guid>
        <description><![CDATA[
STOP PRESS: Trying a new prototype executable for MACs.<br />
Built with ifort defaults on a macBook Pro (using sse3 I guess).<br />
Eric and Igor.
]]></description>
        <pubDate>Mon, 16 Jul 2012 14:29:10 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server status]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3474</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3474</guid>
        <description><![CDATA[
My colleague has cleaned the database and I think that is the end of the http errors etc.<br />
I have submitted new work and I am always getting results anyway. There is still a whole<br />
bag of worms around sse2 sse3 ssse3 pni and whatever, not helped by Intel's ifort<br />
refusal to run optimised code on non-Intel hardware.<br />
<br />
Igor has much improved version distribution and some people are getting "PNI"<br />
versions. The important thing is that SSE2 upwards is much faster than the generic<br />
version; we don't want to waste resources. All versions are completely numerically<br />
portable (I hope so), but when the panic is over I shall be looking at all rejected results,<br />
as I believe they are due to hardware failures (over-clocking?).<br />
<br />
If all goes well I shall try and issue an update to whatever happened to lhc@home<br />
this weekend. <br />
<br />
In the meantime someone has changed the WWW pages, or whatever, and I don't even know if you<br />
can read this. All my bookmarks failed and my usual start page is NOT available.<br />
<br />
Eric (from his new super MAC notebook pro, bought at great personal expense,<br />
but I have never had the time to set it up. I am going to try and install BOINC now.)<br />

]]></description>
        <pubDate>Wed, 11 Jul 2012 17:58:09 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Server/Executable problems]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3455</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3455</guid>
        <description><![CDATA[
An exciting day; a new particle and maybe even the Higgs boson itself.<br />
<br />
We have been busy preparing new executables for BOINC, including a MAC <br />
executable.<br />
<br />
Sadly we have run out of disk space and there are likely to be some hiccups <br />
for the next few hours, hopefully not longer. We have three new executables for <br />
both Windows and Linux: run anywhere, use SSE2, use SSE3. The run anywhere is <br />
slow but every little helps. The executable for MAC requires at least SSE3 I <br />
believe and the exact requirements are not well understood as I write.<br />
I am currently running tests on as many types of hardware as I can.<br />
<br />
The disk full situation can cause havoc and certainly explains why you have <br />
not been able to get more work for the last hours.<br />
More news as soon as we make some progress.<br />
<br />
Thanks for your continued support, which will help to make an even better<br />
LHC for 2015.              Eric.<br />

]]></description>
        <pubDate>Wed, 04 Jul 2012 12:05:36 GMT</pubDate>
        </item>
    <item>
        <title><![CDATA[Sixtrack server migration today]]></title>
        <link>https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3441</link>
        <guid isPermaLink="true">https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=3441</guid>
        <description><![CDATA[
Dear volunteers,<br />
<br />
The Sixtrack BOINC project has been migrated to a new server today.  If you should encounter any difficulties with the setup, please detach from the project and attach again.<br />
<br />
BOINC and Sixtrack should be fully operational again from 2PM CET. (12:00 UTC)<br />
<br />
<br />
Best regards, the BOINC service team.
]]></description>
        <pubDate>Tue, 05 Jun 2012 10:55:57 GMT</pubDate>
        </item>
    
        </channel>
        </rss>
    