1) Message boards : Number crunching : LHC says no tasks available for any project (Message 43030)
Posted 11 Jul 2020 by pls
Post:
And I'm still not getting anything at all.

My computer ID is 10483643, in case anyone knows what to look for.

The server status page shows tasks ready to send. My event log just says:
2020-07-11 02:01:57 | LHC@home | Sending scheduler request: Requested by project.
2020-07-11 02:01:57 | LHC@home | Requesting new tasks for CPU
2020-07-11 02:02:00 | LHC@home | Scheduler request completed: got 0 new tasks
2020-07-11 02:02:00 | LHC@home | No tasks sent
2020-07-11 02:02:00 | LHC@home | No tasks are available for SixTrack
2020-07-11 02:02:00 | LHC@home | No tasks are available for Theory Simulation
2020-07-11 02:02:00 | LHC@home | No tasks are available for ATLAS Simulation
2) Message boards : Number crunching : LHC says no tasks available for any project (Message 42983)
Posted 9 Jul 2020 by pls
Post:
I'm reactivating LHC@home after being away for a while. I have just installed the latest BOINC and VirtualBox and attached to the project.

However, according to the event log, there are no tasks available for any subproject. This contradicts what the web site's Server Status page says.

What can I do to fix this and get operating?

Thanks,
++PLS
3) Message boards : Number crunching : Local control of which subprojects run 2 (Message 37304)
Posted 11 Nov 2018 by pls
Post:
I seem to have accidentally solved my problem. I set

<max_concurrent>1</max_concurrent>

and

<avg_ncpus>1.0</avg_ncpus>

I figured that if I have to run LHCb tasks, they can only have 1 CPU.
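For anyone who wants to copy this, here is a minimal sketch of how those two settings fit together in app_config.xml. I'm assuming the short app name is LHCb and the plan class is vbox64_mt_mcore_lhcb, as in the memory-requirements list; check client_state.xml for the exact names your client reports.

<app_config>
    <app>
        <!-- run at most one LHCb task at a time -->
        <name>LHCb</name>
        <max_concurrent>1</max_concurrent>
    </app>
    <app_version>
        <!-- give each LHCb VM a single CPU -->
        <app_name>LHCb</app_name>
        <plan_class>vbox64_mt_mcore_lhcb</plan_class>
        <avg_ncpus>1.0</avg_ncpus>
        <!-- 748 + 1300 * 1 CPU = 2048 MB, per the LHCb memory formula -->
        <cmdline>--memory_size_mb 2048</cmdline>
    </app_version>
</app_config>

If I remember right, BOINC Manager's "Read config files" command should pick this up without a restart.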

And I haven't seen an LHCb job since.

Is LHCb not handing out 1-CPU jobs any more?
4) Message boards : Number crunching : Memory requirements for LHC applications (Message 37303)
Posted 11 Nov 2018 by pls
Post:
I think we're done. Can someone with the authority please take this last copy and post it as a pinned message?

Thanks
5) Message boards : Number crunching : Local control of which subprojects run 2 (Message 37279)
Posted 8 Nov 2018 by pls
Post:
Yeah, I have that. But while that page gives the format of the options, it doesn't really explain what they DO.

Really, I have this problem because the control that LHC provides over which subprojects are run is at the wrong level. It should be at the individual computer level. Not that it can't also be at the account level, serving as the default for individual computers, but it needs to be at the computer level, too.

Thanks for the information.
6) Message boards : Number crunching : Memory requirements for LHC applications (Message 37278)
Posted 8 Nov 2018 by pls
Post:
I've incorporated the changes. I'm not sure the sources are still useful, since every entry now has multiple sources, so I'll likely remove those at the last step.

=======================================

LHC application multithreaded memory requirements.
Single-threaded applications, VirtualBox or not, are
not listed here. (A worked example follows the list.)


App: ATLAS VirtualBox64
Formula: 3000 + 900 * nCPU, megabytes
Plan class: vbox64_mt_mcore_atlas
Command line: --memory_size_mb megabytes
OS: Linux and Windows
Source: Yeti setup guide, version 3

App: ATLAS native (not VirtualBox)
Formula: 100 + 2000 * nCPU, megabytes
Plan class: native_mt
Command line: --memory_size_mb megabytes
OS: Linux only
Source: Number crunching, message 37225

App: CMS


App: LHCb VirtualBox64
Formula: 748 + 1300 * nCPU, megabytes
Plan class: vbox64_mt_mcore_lhcb
Command line: --memory_size_mb megabytes
OS: Linux and Windows
Source: LHCb Application board, message 37105

App: Theory VirtualBox64
Formula: 630 + 100 * nCPU, megabytes
Plan class: vbox64_mt_mcore
Command line: not needed, memory is computed by server
OS: Linux and Windows
Source: Number crunching, message 37193

App: Theory VirtualBox32
Formula: 256 + 64 * nCPU, megabytes
Plan class: vbox32
Command line: --memory_size_mb megabytes
OS: Linux and Windows
Source: Number crunching, message 37229
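As a worked example of how to use these formulas (the short app name ATLAS below is my assumption; the exact name should come from client_state.xml): a 4-CPU ATLAS VirtualBox task needs 3000 + 900 * 4 = 6600 MB, which could be passed from app_config.xml like this:

<app_config>
    <app_version>
        <app_name>ATLAS</app_name>
        <plan_class>vbox64_mt_mcore_atlas</plan_class>
        <avg_ncpus>4.0</avg_ncpus>
        <!-- 3000 + 900 * 4 CPUs = 6600 MB -->
        <cmdline>--memory_size_mb 6600</cmdline>
    </app_version>
</app_config>

Theory VirtualBox64 is the exception: as noted above, its memory is computed by the server, so no command line is needed there.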
7) Message boards : Number crunching : Local control of which subprojects run 2 (Message 37275)
Posted 8 Nov 2018 by pls
Post:
And so it didn't. Is there a way with app_config to have LHCb not run at all?
8) Message boards : Number crunching : Local control of which subprojects run 2 (Message 37272)
Posted 7 Nov 2018 by pls
Post:
I'm starting a new thread because the existing one has become pretty muddled.

I want a way to locally control which LHC subprojects will be run. I can't use the account controls because I'm computing through Gridcoin and have to run under the Gridcoin pool account. This is not a request for Gridcoin support.

I wanted to eliminate LHCb tasks, since at the moment they just sit there occupying resources without doing anything useful. So I added the following to my LHC app_config file (a complete-file sketch follows the fragment):

<app>
    <name>LHCb</name>
    <max_concurrent>0</max_concurrent>
</app>
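For completeness, here is the whole file as a sketch, on the assumption that LHCb is the correct short app name (it has to match the name the client reports in client_state.xml) and that the file sits in the LHC@home project directory:

<app_config>
    <app>
        <!-- intended to keep LHCb tasks from running at all -->
        <name>LHCb</name>
        <max_concurrent>0</max_concurrent>
    </app>
</app_config>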

This isn't working; I'm still getting LHCb tasks.

I know that the app_config is being read and that other items in the app_config are taking effect. Did I miss something I need here?

Thanks,
++PLS
9) Message boards : Number crunching : Memory requirements for LHC applications (Message 37258)
Posted 7 Nov 2018 by pls
Post:
Thank you all. This has become far more interesting than I thought it would be.

Here is the current file, incorporating these changes. I'm not listing the SixTrack entries. It seems the native jobs really don't need assistance in managing memory, whereas VirtualBox requires specifying the memory size when the VM is created.

======================================
LHC application multithreaded memory requirements.
Single-threaded applications, VirtualBox or not, are
not listed here.


App: ATLAS VirtualBox64
Formula: 3000 + 900 * nCPU, megabytes
Plan class: vbox64_mt_mcore_atlas
Command line: --memory_size_mb megabytes
Source: Yeti setup guide, version 3

App: ATLAS native (not VirtualBox)
Formula: 2100 + 2000 * nCPU, megabytes
Plan class: native_mt
Command line: --memory_size_mb megabytes
Source: Number crunching, message 37225

App: CMS


App: LHCb VirtualBox64
Formula: 748 + 1300 * nCPU, megabytes
Plan class: vbox64_mt_mcore_lhcb
Command line: --memory_size_mb megabytes
Source: LHCb Application board, message 37105

App: Theory VirtualBox64
Formula: 630 + 100 * nCPU, megabytes
Plan class: vbox64_mt_mcore
Command line: not needed, memory is computed by server
Source: Number crunching, message 37193

App: Theory VirtualBox32
Formula: 256 + 64 * nCPU, megabytes
Plan class: vbox32
Command line: --memory_size_mb megabytes
Source: Number crunching, message 37229
10) Message boards : Number crunching : Memory requirements for LHC applications (Message 37225)
Posted 4 Nov 2018 by pls
Post:
Corrections are incorporated. I couldn't find requirements for CMS; does anyone have them?

=====================================

LHC application multithreaded memory requirements
Single threaded applications are not listed here.

App: ATLAS
Formula: 3000 + 900 * nCPU, megabytes
Command line: --memory_size_mb megabytes
Source: Yeti setup guide, version 3

App: CMS


App: LHCb
Formula: 2048 + 1300 * nCPU, megabytes
Command line: --memory_size_mb megabytes
Source: LHCb Application board, message 37105

App: Theory
Formula: 630 + 100 * nCPU, megabytes
Command line: not needed, memory is computed by server
Source: Number crunching, message 37193
11) Message boards : Number crunching : Memory requirements for LHC applications (Message 37189)
Posted 3 Nov 2018 by pls
Post:
I'm trying to assemble an authoritative list of memory requirement formulas for the multithreaded apps. The bottom of this post is what I have so far. Please let me know of any additions or corrections. When complete, perhaps someone with the appropriate authority can make this a pinned post.

Thanks,
++PLS

=======================================================================

LHC application multithreaded memory requirements
Single threaded applications are not listed here.

App: ATLAS
Formula: 3900 + 900 * nCPU, megabytes
Command line: --memory_size_mb megabytes
Source: Yeti setup guide, version 3

App: CMS


App: LHCb
Formula: 2048 + 1300 * nCPU, megabytes
Command line: --memory_size_mb megabytes
Source: LHCb Application board, message 37105

App: Theory
Formula: 730 + 100 * nCPU, megabytes
Command line: --memory_size_mb megabytes
Source: Number crunching, message 37161
12) Message boards : Number crunching : Local control of which subprojects run (Message 37162)
Posted 2 Nov 2018 by pls
Post:
I can't use the account setting because I am also pool mining gridcoin and therefore running under the pool account. The only mechanism I have is something local to the machine, like app_config.
13) Message boards : Number crunching : Local control of which subprojects run (Message 37153)
Posted 1 Nov 2018 by pls
Post:
I will give it a try and report.

By the way, could someone who can post a pinned message please post one containing the formula for computing memory requirements for all of the mt apps?

Thanks
14) Message boards : Number crunching : Local control of which subprojects run (Message 37148)
Posted 1 Nov 2018 by pls
Post:
The various subprojects of LHC have really different characteristics, and the only existing way to control which ones are run is at the account level.

I'm wondering if this control is possible at the local machine level.

Can I make an app_config that will have, e.g., LHCb tasks never downloaded or run?

Thanks,
++PLS
15) Message boards : Number crunching : Work unit hold a slot but doing nothing (Message 37147)
Posted 1 Nov 2018 by pls
Post:
ATLAS already behaves like you request it (1..4).

Theory's intermediate results are very small.
As long as you don't get a bad Sherpa job it should use nearly 100% of your CPU.



Not really.

I've had multi-CPU tasks for ATLAS and Theory running. From looking at the CPU and elapsed times, and checking on the CPU actually used, both go long periods using zero CPU. I think both are holding resources while waiting for tasks to become available: actually holding 4 or 6 slots while doing nothing but waiting.

My whole point was that this is not an efficient way to work. Let BOINC download the files, and don't spin up a 6-CPU VM unless at least 6 tasks are downloaded and waiting. And when those tasks are done, leave politely.

++PLS
16) Message boards : Number crunching : Work unit hold a slot but doing nothing (Message 37075)
Posted 22 Oct 2018 by pls
Post:
I just saw the notice on Sixtrack News about the lack of work. OK. Now I'd like to talk about the lack of work on LHCb and the vbox projects.

Lack of work on those projects has a greater impact than lack of work on SixTrack. This is because the vbox projects hold a BOINC slot while waiting for work: not only is the vbox project doing nothing, it's preventing BOINC from running a different project. It also holds a slot while doing the very low-value tasks of uploading and downloading, which BOINC could do nicely without holding a slot.

I just looked at my BOINC processes. I am running 6 CPU slots, all occupied by LHC vbox tasks, but only 2 are using CPU. Three active is probably more common, but still not very efficient.

I understand why you use vbox and Linux, and I have no complaint with that. But I wish the project were set up to work in a more typical BOINC fashion:
1. BOINC downloads input files.
2. Spin up a VM and process the input files.
3. Exit the VM.
4. BOINC uploads the results.

I'm donating my machine resources on the assumption that they are being used productively. Holding slots while doing nothing or almost nothing isn't making good use of my resources.

++PLS
17) Message boards : Number crunching : VirtualBox 5.2.18? (Message 36687)
Posted 12 Sep 2018 by pls
Post:
Thanks. I'll try it.
18) Message boards : Number crunching : VirtualBox 5.2.18? (Message 36683)
Posted 11 Sep 2018 by pls
Post:
Has the new version 5.2.18 of VirtualBox been tested? Is it OK to upgrade to it?

Thanks
19) Message boards : ATLAS application : Result upload failure (Message 33325)
Posted 14 Dec 2017 by pls
Post:
Thanks for the note.
20) Message boards : ATLAS application : Result upload failure (Message 33321)
Posted 13 Dec 2017 by pls
Post:
I have a continuing failure to upload a result from the ATLAS simulation. It ran for 32 hours, so I'd really like to get credit for it. I suspect part of the problem is that the result file is 175 MB, larger than other result files I've seen. The upload goes well until about 40% complete, then slows down. Somewhere between 50% and 66% the upload ends and restarts after a backoff. Meanwhile, uploads for other tasks are going through without problem.

Here is a log extract for the upload in question:

2017-12-13 14:55:01 | | [http_xfer] [ID#123] HTTP: wrote 16384 bytes
2017-12-13 14:55:01 | | [http_xfer] [ID#123] HTTP: wrote 16384 bytes
2017-12-13 14:55:01 | | [http_xfer] [ID#123] HTTP: wrote 16384 bytes
2017-12-13 14:55:01 | | [http_xfer] [ID#123] HTTP: wrote 16207 bytes
2017-12-13 14:56:38 | LHC@home | [http] HTTP error: Failure when receiving data from the peer
2017-12-13 14:56:39 | LHC@home | [file_xfer] http op done; retval -184 (transient HTTP error)
2017-12-13 14:56:39 | LHC@home | [file_xfer] file transfer status -184 (transient HTTP error)
2017-12-13 14:56:39 | LHC@home | Temporarily failed upload of mUsKDmvKVirnSu7Ccp2YYBZmABFKDmABFKDmtYMKDmABFKDmh4sYxm_0_r276664903_ATLAS_result: transient HTTP error
2017-12-13 14:56:39 | LHC@home | Backing off 04:20:48 on upload of mUsKDmvKVirnSu7Ccp2YYBZmABFKDmABFKDmtYMKDmABFKDmh4sYxm_0_r276664903_ATLAS_result
2017-12-13 14:56:40 | | Project communication failed: attempting access to reference site
2017-12-13 14:56:40 | | [http] HTTP_OP::init_get(): https://www.google.com/

The access to Google was successful.

If you need any other information, please let me know.

