21) Message boards : CMS Application : How do I set up a local HTTP proxy?
Message 51247
Posted 5 Dec 2024 by ProfileGuy
Yes, Thank you.

I have set up my LHC@home to run just 1 CMS task at a time with this app_config.xml
One multi-threaded CMS task at a time is all my 8-core CPU can manage comfortably - a minimum of 4 cores per task is a project requirement, so theoretically I could run 2 CMS tasks at a time. I actually have. But using all 8 cores of my CPU for crunching leaves no room for background OS tasks to run, and the overall system slows down.
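(For context, the app_config.xml in question has the same shape as the examples posted later in this list - roughly this, with the 4-CPU values matching the CMS minimum mentioned above:)

```xml
<app_config>

  <app>
    <name>CMS</name>
    <max_concurrent>1</max_concurrent>  <!-- only 1 CMS task at a time -->
  </app>
  <app_version>
    <app_name>CMS</app_name>
    <plan_class>vbox64_mt_mcore_cms</plan_class>
    <avg_ncpus>4</avg_ncpus>            <!-- the project's 4-core minimum -->
    <cmdline>--nthreads 4</cmdline>
  </app_version>

</app_config>
```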
Anyhoo. I was using squid successfully on Windows 10 a few months ago, before I migrated my BOINC projects to Linux, so I know from experience that squid helps. In fact it's really cool - if you can set it up properly.
I read the very helpful HowTo set up a local Squid and had little more to do than enter my ip address where it said.

Now it's not working on my Linux box (Tumbleweed 15.6). It caches what it's supposed to and passes everything else through, but nothing uploads.
It's not a major problem, because you can simply stop your BOINC client from using the squid HTTP proxy (in the "Options -> Other options" -> HTTP Proxy tab) and it resumes communicating as if the proxy had never been there, with no problems at all.
I'd like to get it to work though.
There are a lot of changes posted in the new comments thread, and I've tried to implement them. I've even had a look at firewalls. I'm just not sure what the right thing to do is.
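For readers comparing notes, a minimal local-proxy squid.conf has roughly the shape below. This is a hedged illustration, not the HowTo's config: the directives are standard squid ones, but the LAN range and cache sizes are made-up placeholders - use the values from the HowTo itself:

```text
# Illustrative sketch only - NOT the LHC@home HowTo config.
# The LAN range and cache sizes below are placeholders.
http_port 3128
acl localnet src 192.168.1.0/24               # replace with your own LAN range
http_access allow localnet
http_access deny all
cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB on-disk cache
maximum_object_size 1024 MB                   # VM images can be large
```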

And I'm uncertain which LHC@home jobs use the proxy now.

Any insights, help would be welcome.
Thanks
22) Message boards : CMS Application : Short CMS-Tasks ok?
Message 51246
Posted 5 Dec 2024 by ProfileGuy
The server hardware is being swapped around. Major reconfigurations are taking place. In other words - the machines the volunteers connect to are taken off line sporadically during this.
No work is available while the entire LHC@home crew is busy doing this. There's just no need to generate work for the BOINC volunteers while the system is mostly off line.
The work units that 'pop out' are just empty data-transport vehicles with no actual LHC@home data in them to crunch. The data-transport system is functioning, but there are no "passengers" - or in this case, data - aboard.

There are gaps, but often there is work available from the LHC@home project during this maintenance. I still have a very long Theory job running from last week, and last week saw ATLAS jobs available too. Sometimes you get all three at the same time: CMS, ATLAS & Theory!

These empty tasks are, some think, a bit of a waste of time. Stop them if you want.
I stopped pulling CMS work units a few days ago.
To do this:
Click on the "Project" item in the menu bar at the top of any LHC@home web page.
In the drop down list select "Preferences".
Click "Edit preferences".
Un-check the "CMS Simulation".
Un-check "If no work for selected applications is available, accept work from other applications?" (Leave everything else alone!)
Click "Update preferences".

At this point all you can do is keep an eye on the CMS Application (this) message board for news of new work available.
You could look at the "Computing -> Server status" page - but it doesn't say if the jobs are hollow or not. Check the message boards.

On the technical side, for example -
The errors I found logged in the stderr output generated by the various CMS simulations I downloaded reveal one LHC@home server after another going off line and coming back on again while the crew worked. Each job sends this stderr to the CERN servers upon completion.
To find this particular stderr output - yes, there's more than one per task - (it's best to do this in another browser tab while you read the instructions here):
Click on the "Project" item in the menu bar at the top of any LHC@home web page.
In the drop down list select "Account" to open your account page.
Click Tasks View
In the page that opens is a table of your current and recent tasks.
(IMHO it's not easy to tell which job you want in this list. You have to click each one's Task number and look at the "Name" or the "Date" to identify it.)
Find the job you're interested in examining and click on its number in the first column - that's its Task number.
Note that the stderr output is only available for completed tasks, whether they errored or not.

This example snippet shows the error logged to CERN's stderr by my computer when one of those functioning-but-empty transport vehicles arrived last week.
It shows that the HTCondor service was unreachable - yes, all this for just one server off line!
...
2024-11-17 20:03:37 (14664): Guest Log: [INFO] Testing connection to HTCondor
2024-11-17 20:03:53 (14664): Guest Log: [DEBUG] Status run 1 of up to 3: 1
2024-11-17 20:04:14 (14664): Guest Log: [DEBUG] Status run 2 of up to 3: 1
2024-11-17 20:04:39 (14664): Guest Log: [DEBUG] Status run 3 of up to 3: 1
2024-11-17 20:04:39 (14664): Guest Log: [DEBUG] run 1
2024-11-17 20:04:39 (14664): Guest Log: Ncat: Version 7.50 ( https://nmap.org/ncat )
2024-11-17 20:04:39 (14664): Guest Log: Ncat: Connection timed out.
2024-11-17 20:04:39 (14664): Guest Log: run 2
2024-11-17 20:04:39 (14664): Guest Log: Ncat: Version 7.50 ( https://nmap.org/ncat )
2024-11-17 20:04:39 (14664): Guest Log: Ncat: Connection timed out.
2024-11-17 20:04:39 (14664): Guest Log: run 3
2024-11-17 20:04:39 (14664): Guest Log: Ncat: Version 7.50 ( https://nmap.org/ncat )
2024-11-17 20:04:39 (14664): Guest Log: NCAT DEBUG: Using system default trusted CA certificates and those in /usr/share/ncat/ca-bundle.crt.
2024-11-17 20:04:39 (14664): Guest Log: NCAT DEBUG: Unable to load trusted CA certificates from /usr/share/ncat/ca-bundle.crt: error:02001002:system library:fopen:No such file or directory
2024-11-17 20:04:39 (14664): Guest Log: libnsock nsi_new2(): nsi_new (IOD #1)
2024-11-17 20:04:39 (14664): Guest Log: libnsock nsock_connect_tcp(): TCP connection requested to 137.138.156.85:9618 (IOD #1) EID 8
2024-11-17 20:04:39 (14664): Guest Log: libnsock nsock_trace_handler_callback(): Callback: CONNECT TIMEOUT for EID 8 [137.138.156.85:9618]
2024-11-17 20:04:39 (14664): Guest Log: Ncat: Connection timed out.
2024-11-17 20:04:39 (14664): Guest Log: [ERROR] Could not connect to vocms0840.cern.ch on port 9618
2024-11-17 20:04:39 (14664): Guest Log: [INFO] Testing connection to WMAgent
2024-11-17 20:04:39 (14664): Guest Log: [INFO] Testing connection to EOSCMS
2024-11-17 20:04:40 (14664): Guest Log: [INFO] Testing connection to CMS-Factory
2024-11-17 20:04:40 (14664): Guest Log: [INFO] Testing connection to CMS-Frontier
2024-11-17 20:04:40 (14664): Guest Log: [INFO] Testing connection to Frontier
2024-11-17 20:04:40 (14664): Guest Log: [DEBUG] Check your firewall and your network load
2024-11-17 20:04:40 (14664): Guest Log: [ERROR] Could not connect to all required network services
...

So it's just a matter of time before we see the maintenance upgrades completed.
It's a big old system, y'all. Patience needed by all.
23) Message boards : CMS Application : How do I limit the number of concurrent CMS VM's?
Message 51227
Posted 30 Nov 2024 by ProfileGuy
There is a similar post here -

More reasons to use an app_config.xml with your project.

with other reasons for using an app_config.xml described.
24) Message boards : CMS Application : Problems connecting to servers?
Message 51214
Posted 28 Nov 2024 by ProfileGuy
They must be dry-running the servers.
It would be nice if they kept that local.
25) Message boards : CMS Application : Problems connecting to servers?
Message 51212
Posted 28 Nov 2024 by ProfileGuy
OK thanks, Harri.
These "empty" CMS jobs still use 4 of my CPUs...
I'll stop pulling CMS tasks until work is available.
26) Message boards : CMS Application : Problems connecting to servers?
Message 51211
Posted 28 Nov 2024 by ProfileGuy
Yes. Thanks.
As noted - that failed utterly! Yikes!
27) Message boards : CMS Application : Problems connecting to servers?
Message 51209
Posted 28 Nov 2024 by ProfileGuy
Because CMS appears to be using only 1 CPU, I tried adjusting my app_config.xml to use 1 CPU for CMS jobs.
They all failed!
CMS multithread jobs need 4 CPUs (minimum).
It "looked" like it was working... But all have since failed with this logged in stderr -
2024-11-28 11:15:41 (18379): Guest Log: [INFO] CMS application starting. Check log files.
2024-11-28 11:27:55 (18379): Guest Log: [ERROR] VM expects at least 4 CPUs but reports only 1.
Changed it back to 4 CPUs & threads. All OK now!
But this is a waste of compute resources: three of the four CPU cores are doing exactly nothing.
28) Message boards : CMS Application : Problems connecting to servers?
Message 51207
Posted 28 Nov 2024 by ProfileGuy
The CMS multithread tasks are using just 1 CPU at the moment.
The sysfolk must be sending out test jobs.

<stderr_txt>
2024-11-28 10:14:06 (15635): vboxwrapper version 26207
...
2024-11-28 10:15:35 (15635): Guest Log: [INFO] CMS application starting. Check log files.
2024-11-28 10:42:29 (15635): Guest Log: [INFO] glidein exited with return value 0.
2024-11-28 10:42:30 (15635): Guest Log: [INFO] Shutting Down.
2024-11-28 10:42:30 (15635): VM Completion File Detected.
2024-11-28 10:42:30 (15635): VM Completion Message: glidein exited with return value 0.
...

</stderr_txt>
The jobs are all completing and being verified successfully.
But note the timestamps: the jobs take a minute and a half to initialise, then 28 minutes to complete.
29) Message boards : CMS Application : Problems connecting to servers?
Message 51200
Posted 27 Nov 2024 by ProfileGuy
From my All tasks web page -

Task:         417388207
Work unit:    228404289
Computer:     10860321
Sent:         27 Nov 2024, 9:04:47 UTC
Reported:     9:24:36 UTC
Status:       Error while computing
Run time:     126.59
CPU time:     21.70
Application:  CMS Simulation v70.30 (vbox64_mt_mcore_cms) x86_64-pc-linux-gnu

this is the error I'm seeing with CMS -

417388207 stderr output from above task.

The following error occurs towards the end of the above stderr output:
...
2024-11-27 09:22:20 (35388): Guest Log: [INFO] Testing connection to EOSCMS
2024-11-27 09:22:20 (35388): Guest Log: [DEBUG] Status run 1 of up to 3: 1
2024-11-27 09:22:26 (35388): Guest Log: [DEBUG] Status run 2 of up to 3: 1
2024-11-27 09:22:40 (35388): Guest Log: [DEBUG] Status run 3 of up to 3: 1
2024-11-27 09:22:40 (35388): Guest Log: [DEBUG] run 1
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Version 7.50 ( https://nmap.org/ncat )
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Connection to 128.142.160.140 failed: Connection refused.
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Trying next address...
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Connection refused.
2024-11-27 09:22:40 (35388): Guest Log: run 2
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Version 7.50 ( https://nmap.org/ncat )
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Connection to 128.142.160.140 failed: Connection refused.
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Trying next address...
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Connection refused.
2024-11-27 09:22:40 (35388): Guest Log: run 3
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Version 7.50 ( https://nmap.org/ncat )
2024-11-27 09:22:40 (35388): Guest Log: NCAT DEBUG: Using system default trusted CA certificates and those in /usr/share/ncat/ca-bundle.crt.
2024-11-27 09:22:40 (35388): Guest Log: NCAT DEBUG: Unable to load trusted CA certificates from /usr/share/ncat/ca-bundle.crt: error:02001002:system library:fopen:No such file or directory
2024-11-27 09:22:40 (35388): Guest Log: libnsock nsi_new2(): nsi_new (IOD #1)
2024-11-27 09:22:40 (35388): Guest Log: libnsock nsock_connect_tcp(): TCP connection requested to 128.142.160.140:1094 (IOD #1) EID 8
2024-11-27 09:22:40 (35388): Guest Log: libnsock nsock_trace_handler_callback(): Callback: CONNECT ERROR [Connection refused (111)] for EID 8 [128.142.160.140:1094]
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Connection to 128.142.160.140 failed: Connection refused.
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Trying next address...
2024-11-27 09:22:40 (35388): Guest Log: libnsock nsock_connect_tcp(): TCP connection requested to 2001:1458:301:17::100:9:1094 (IOD #1) EID 16
2024-11-27 09:22:40 (35388): Guest Log: libnsock nsock_trace_handler_callback(): Callback: CONNECT ERROR [Connection refused (111)] for EID 16 [2001:1458:301:17::100:9:1094]
2024-11-27 09:22:40 (35388): Guest Log: Ncat: Connection refused.
2024-11-27 09:22:40 (35388): Guest Log: [ERROR] Could not connect to eoscms-ns-ip563.cern.ch on port 1094
2024-11-27 09:22:40 (35388): Guest Log: [INFO] Testing connection to CMS-Factory
2024-11-27 09:22:41 (35388): Guest Log: [INFO] Testing connection to CMS-Frontier
2024-11-27 09:22:41 (35388): Guest Log: [INFO] Testing connection to Frontier
2024-11-27 09:22:41 (35388): Guest Log: [DEBUG] Check your firewall and your network load
2024-11-27 09:22:41 (35388): Guest Log: [ERROR] Could not connect to all required network services
...
30) Message boards : CMS Application : Problems connecting to servers?
Message 51182
Posted 26 Nov 2024 by ProfileGuy
Yes, Theory jobs run with one thread using exactly one CPU core.
Above, in the <app>..</app> section, the line
<max_concurrent>4</max_concurrent>
limits the number of Theory jobs that run at the same time - and, as you note, each Theory work unit uses exactly 1 CPU core.
So it is possible to have more than one Theory job running on your BOINC client at a time - if you have a multi-core CPU, that is. With all the 'week long' Theory jobs being sent out now, it may be useful to allow other job types, even from other projects, to run at the same time as the long Theory jobs.
This
app_config.xml -
<app_config>

  <app>  
    <name>Theory</name>
    <max_concurrent>4</max_concurrent>
  </app>

</app_config>
limits the number of Theory jobs that run at the same time. And with one job per one CPU core, a maximum of 4 CPU cores will ever be used for Theory jobs with this app_config.xml in effect.
Any remaining cores will run other, non-Theory job types. That's useful for running different types of LHC@home tasks concurrently (if they're available), or for keeping another project's tasks running while your PC crunches through some long LHC@home Theory tasks.

There are as many ways to use app_config.xml as there are different computers.
These examples are running well on my 8 core system.

Before anything else - it's recommended to leave a couple of CPU cores free to run all the background OS processes. A reliable way to do this is to use the BOINC Manager's "Options -> Computing preferences" to limit the number of CPUs that BOINC uses for its number crunching.
Use at most [75] % of the CPUs
works well with my 8-core CPU.

Multi-threaded tasks
NOTE that there are multi-threaded apps out there, and they use completely different app_config.xml elements for limiting (if you want to) the number of CPU cores a multi-threaded task will use.
For instance, two of the other job types offered by the LHC@home project are ATLAS and CMS. These are multi-threaded, meaning they use many of your CPU cores per job. So just one of these multi-threaded jobs will try to use all the CPUs available to your BOINC client - leaving no room for anything else to run. That may suit your needs. In my 8-core, multi-project set-up I prefer a workload that's balanced across different app types and different projects. For me, that means limiting the number of any one particular type of app running at a time, and limiting the total number of apps any one project can run concurrently. But if you're only going to run one project, you can leave out those last limits.
So, in the following app_config.xml the CMS tasks are limited with the <max_concurrent>1</max_concurrent> element to running 1 CMS job at a time. The <avg_ncpus>4</avg_ncpus> and <cmdline>--nthreads 4</cmdline> elements limit the number of CPU cores & threads it uses to 4. That leaves your remaining CPUs free for other job types. Groovy.

app_config.xml
<app_config>

  <app>
    <name>CMS</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>CMS</app_name>
    <plan_class>vbox64_mt_mcore_cms</plan_class>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>--nthreads 4</cmdline>
  </app_version>

  <app>
    <name>Theory</name>
    <max_concurrent>4</max_concurrent>
  </app>

</app_config>


You can do the same with ATLAS, like this -
app_config.xml
<app_config>

  <project_max_concurrent>4</project_max_concurrent>

  <!-- limiting the concurrent apps run by any one project above allows apps from other projects to run as well -->
  <!-- as long as they have work available. Optional -->

  <app>
    <name>ATLAS</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>ATLAS</app_name>
    <plan_class>vbox64_mt_mcore_atlas</plan_class>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>--nthreads 4</cmdline>
  </app_version>

  <app>
    <name>CMS</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>CMS</app_name>
    <plan_class>vbox64_mt_mcore_cms</plan_class>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>--nthreads 4</cmdline>
  </app_version>

<!-- This Theory app section is superfluous because <project_max_concurrent> -->
<!-- above already caps the total at 4. But if you only have LHC work, or have -->
<!-- deleted that element, then you may find this section useful -->
  <app>  
    <name>Theory</name>
    <max_concurrent>4</max_concurrent>
  </app>
  <app_version>
    <app_name>Theory</app_name>
    <plan_class>vbox64_theory</plan_class>
    <!-- nothing to do here! -->
  </app_version>

</app_config>



Instructions for writing app_config.xml files are here.

If you want to write an app_config.xml for your project (or one each for several projects!), put it in that particular project's folder, here:

Windows:
C:\ProgramData\BOINC\projects\<your project>\app_config.xml

Linux:
/var/lib/boinc/projects/<your project>/app_config.xml

Then start your BOINC client.
Or, if it's already running, click "Options -> Read config files" in your BOINC Manager, and it takes effect.
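One way to catch a typo before the client does is to check that the file parses and actually contains the <app> limits you intended. A minimal sketch using Python's standard library (the helper function is my own, for illustration; the path placeholder is the same one as above):

```python
import xml.etree.ElementTree as ET

def check_app_config(path):
    """Parse an app_config.xml and list the per-app limits it sets.

    Returns a dict of app name -> max_concurrent (None if unset).
    Raises xml.etree.ElementTree.ParseError if the XML is malformed.
    """
    root = ET.parse(path).getroot()
    limits = {}
    for app in root.findall("app"):          # direct <app> children only
        name = app.findtext("name")
        mc = app.findtext("max_concurrent")
        limits[name] = int(mc) if mc is not None else None
    return limits

# e.g. check_app_config("/var/lib/boinc/projects/<your project>/app_config.xml")
```

If it raises a ParseError, fix the file before clicking "Read config files".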
31) Questions and Answers : Windows : Windows Theory Simulation v300.30 deadline miss
Message 51129
Posted 24 Nov 2024 by ProfileGuy
Also on the subject of very long running Theory tasks -
This gonna be long - https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6251
And (currently) -
CMS and Atlas have problems, but I'm getting a few Theory jobs that seem to be running.

Have a little patience. They'll sort it out eventually.
32) Message boards : Theory Application : This gonna be long
Message 51128
Posted 24 Nov 2024 by ProfileGuy
Perfect. Thanks. ;-)
33) Message boards : CMS Application : Problems connecting to servers?
Message 51127
Posted 24 Nov 2024 by ProfileGuy
It's vexing.
Long running tasks and fair play - https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6240
This gonna be long - https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=6251

If you get nothing but long Theory jobs using all your CPUs you could limit the number with
app_config.xml
<app_config>

  <app>  
    <name>Theory</name>
    <max_concurrent>4</max_concurrent>
  </app>

</app_config>
and thus allow other jobs to run at the same time. (4 is just an example. It depends on how many of your CPUs you are willing to give to Theory tasks.)

Create the above app_config.xml file and drop it in your LHC@home project data folder, here:
Windows:
C:\ProgramData\BOINC\projects\<your project>\app_config.xml
Linux:
/var/lib/boinc/projects/<your project>/app_config.xml

Instructions for writing app_config.xml files are here
34) Message boards : Theory Application : This gonna be long
Message 51125
Posted 24 Nov 2024 by ProfileGuy
Thank you. There it is!

61,600 of 100,000 events processed
5 days so far.
Started:.....19 Nov 2024, 14:21:01 UTC
Deadline:..30 Nov 2024, 14:21:01 UTC

(This is my only PC, so BOINC only runs for ~75% of the time.)

What happens to a Theory task? Do they just count to 100,000 [default] and stop?
35) Message boards : Theory Application : This gonna be long
Message 51116
Posted 23 Nov 2024 by ProfileGuy
Thanks for the responses.
I won't be able to implement this method -
I don't have
/var/lib/boinc/slots/*/cernvm
I do have
/var/lib/boinc/slots/*/shared
for the Theory task. But there's no runRivet.log or PanDA_Pilot anywhere. I searched for *event* and got nothing.
I do have a vague memory of being able to find the events processed in some xml file, without installing any extra software.
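If it helps anyone else digging through the slots directory, here's a rough sketch of automating that search. It assumes progress lines of the form "61,600 of 100,000 events processed" (the format quoted earlier in this thread); other log formats would need a different pattern:

```python
import re
from pathlib import Path

# Matches progress lines like "61,600 of 100,000 events processed".
# This format is an assumption based on the Theory output quoted above.
EVENTS_RE = re.compile(r"([\d,]+) of ([\d,]+) events processed")

def latest_event_count(slot_dir):
    """Scan readable files under a BOINC slot directory and return the
    last (events_done, events_total) progress line found, or None."""
    best = None
    for p in Path(slot_dir).rglob("*"):
        if not p.is_file():
            continue
        try:
            text = p.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file - skip it
        for m in EVENTS_RE.finditer(text):
            best = (int(m.group(1).replace(",", "")),
                    int(m.group(2).replace(",", "")))
    return best

# e.g. latest_event_count("/var/lib/boinc/slots/0/shared")
```

Run it as root (or as the boinc user) so the slot files are readable.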
36) Message boards : Theory Application : This gonna be long
Message 51110
Posted 22 Nov 2024 by ProfileGuy
Sorry about the power failure.
Lem - please remind me - where do I find the "events" processed?
Thanks
37) Message boards : CMS Application : How do I limit the number of concurrent CMS VM's?
Message 51108
Posted 21 Nov 2024 by ProfileGuy
Extra app_config.xml info.

Where to put app_config.xml
It goes in its particular project folder, here:

Windows:
C:\ProgramData\BOINC\projects\<your project>\app_config.xml
Linux:
/var/lib/boinc/projects/<your project>/app_config.xml

Also -
Limit how much work you download.
It's important to note that the amount of work you choose to download should not exceed your PC's ability to complete it before its deadline.
If you do fetch too many days' worth of work, your BOINC client will 'see' that it can't complete it all in time. But it will try: it "overrides" all your Preferences and app_config.xml settings and assigns ALL(!) of your CPUs to the downloaded work units. That clogs up your PC a bit and slows it down. Why it lets you download too much in the first place I don't know, but that's how it works...

To manage this I've set the 'days of work' downloaded to a fractional value of 0.7 like this:

"Options -> Computing preferences..." then the Computing tab, in the General section:

Store at least [0.7] days and up to an additional [0] days of work

It's OK - it never runs out, because BOINC checks roughly every hour and fetches work from the project servers as often as it needs to. This way it just won't download "too much".

You may set yours differently. My modest 4 GHz desktop PC has 6 of its 8 CPU cores enabled for BOINC, plus 1 GPU, and 0.7 days works well.
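For completeness: the same buffer setting can also be placed in a global_prefs_override.xml file in the BOINC data directory. To the best of my knowledge the element names are as below (check the BOINC client configuration docs); the values are just my example:

```xml
<global_preferences>
  <work_buf_min_days>0.7</work_buf_min_days>
  <work_buf_additional_days>0</work_buf_additional_days>
</global_preferences>
```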

Instructions for writing app_config.xml files are here and, of course, further help can be gleaned if you ask on the forums.
38) Message boards : CMS Application : How do I limit the number of concurrent CMS VM's?
Message 51104
Posted 20 Nov 2024 by ProfileGuy
I have an 8 logical core CPU (i7-4790K) limited by BOINC to use just 6 of the CPUs (75%).
To limit the number of CPUs, use the BOINC Manager's "Options -> Computing preferences".
It's recommended to limit the number of CPU cores used by BOINC so that essential operating-system background processes (and other programs) can run smoothly on the spare cores without interference.

Aside: So thank you captain jack. And after some very helpful tutoring on the subject, I was able to tailor an app_config.xml file for each of my BOINC projects.

OK
Further limiting the concurrent apps run by any one project allows apps from other projects to run concurrently on the remaining CPUs as well.
Also, LHC@home has some multithreaded apps. By default, each one of those will try to use all available CPUs by itself.
So the following app_config.xml sets LHC@home up to use at most 4 of the 6 CPUs I've allowed for BOINC.
Like this:

Only 1 concurrent ATLAS task (1 Multithreaded task - Uses 4 CPUs)
or
Only 1 concurrent CMS task (1 Multithreaded task - Uses 4 CPUs)
or
Up to 4 concurrent Theory tasks. (Not Multithreaded - 1 CPU per task)

And the spare CPUs will run apps from other projects, if available.

app_config.xml
<app_config>  <!-- SuSE Linux Tumbleweed - no native apps -->

  <project_max_concurrent>4</project_max_concurrent>
  <!-- limiting the concurrent apps run by any one project allows apps from other projects to run as well -->
  <!-- as long as they have work available. Optional -->

  <app>
    <name>ATLAS</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>  <!-- VM -->
    <app_name>ATLAS</app_name>
    <plan_class>vbox64_mt_mcore_atlas</plan_class>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>--nthreads 4</cmdline>
  </app_version>

  <app>
    <name>CMS</name>  <!-- has VM tasks only -->
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>  <!-- VM -->
    <app_name>CMS</app_name>
    <plan_class>vbox64_mt_mcore_cms</plan_class>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>--nthreads 4</cmdline>
  </app_version>

<!-- This 'app' section is superfluous because of <project_max_concurrent> above -->
<!-- included for illustrative purposes only -->
  <app>  
    <name>Theory</name>
    <max_concurrent>4</max_concurrent>
  </app>
  <app_version>  <!-- VM -->
    <app_name>Theory</app_name>
    <plan_class>vbox64_theory</plan_class>
    <!-- nothing to do here either -->
  </app_version>

</app_config>

You may need to refer to the LHC@home Applications page if you're going to edit the app_config.xml file for yourself.
The "Version" column contains, in brackets, the "plan class" used in the app_config.xml file to identify the specific app BOINC will automatically send you, depending on your OS and CPU.

I'm not running native apps because they take too long to run on my modest desktop PC.

Thanks to all.
39) Message boards : Theory Application : This gonna be long
Message 51102
Posted 20 Nov 2024 by ProfileGuy
Yep.

They're ironing out a few wrinkles at the moment.
The CMS task problem has apparently been fixed. But I've got to wait - hoping - for a "10 Day" Theory task to finish before I get to see any CMS tasks and find out!

Here's my list of failure -
https://lhcathome.cern.ch/lhcathome/results.php?userid=95350
LOL
40) Message boards : CMS Application : Problems connecting to servers?
Message 51101
Posted 20 Nov 2024 by ProfileGuy
Mine is still not working -

All Tasks

Checking the times on that... Ah.
OK There are, as I write this, reports it's working now...
[Preferences set to send me CMS...]

Frustrating.
I'll probably have to wait because there's a "10 Day" Theory task running at the moment! (See this post)
Yesterday I set the Project Preferences as below to stop CMS tasks being sent to me, but I still got them:
Run only the selected applications      SixTrack: yes
                                        sixtracktest: yes
                                        CMS Simulation: no
                                        Theory Simulation: yes
                                        ATLAS Simulation: yes
That's probably because of -
If no work for selected applications is available, accept work from other applications?        yes
Fun with a dash of very dry irony.