21) Message boards : Sixtrack Application : New WU Notification? (Message 37867)
Posted 31 Jan 2019 by marmot
Post:
Simply set the resource share at the project's (web-)preferences page according to your needs.

Example:
Project A (LHC@home): 1000
Project B (other CPU project): 1
Project C (other CPU project): 1



This just doesn't work much of the time.

Some projects will aggressively flood your cache with very short-deadline WUs, keep them coming, and Sixtrack will give the message

"Not highest priority project" because all the other projects' WUs are up against their deviously short deadlines.

Resource share is a nearly useless management feature, so I came up with other methods.
22) Message boards : Sixtrack Application : multi threading six track possible in any way? (Message 37866)
Posted 31 Jan 2019 by marmot
Post:
Can we assign an app_config for an 8-core host, such as:

<app_config>
   <app>
      <name>sixtrack</name>
      <max_concurrent>2</max_concurrent>    <!-- run at most 2 Sixtrack tasks at once -->
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>sixtrack</app_name>
      <avg_ncpus>4</avg_ncpus>              <!-- treat each task as using 4 cores -->
      <cmdline>-t4</cmdline>                <!-- hoped-for switch asking the app for 4 threads -->
   </app_version>
</app_config>


If not, how much trouble would it be to implement multi-threading for the WUs?
23) Message boards : ATLAS application : ATLAS native - Configure CVMFS to work with openhtc.io (Message 37784)
Posted 19 Jan 2019 by marmot
Post:
From the Cloudflare http://OpenHTC.io website:

"ask computing sites with many clients to supply their own caching proxy servers."

Any idea what number they assign to 'many'?

8? Maybe 15? But certainly not 30 per individual IP address?

Is it measured per project?
Would they start objecting once BOINC volunteers pass 1,000 clients all doing ATLAS, even though each individual IP only has 4 to 6 clients?
24) Message boards : Number crunching : Local control of which subprojects run (Message 37597)
Posted 13 Dec 2018 by marmot
Post:


<avg_ncpus>1.0</avg_ncpus>


I went through the same discussion a year or so back. I'm not in the pool, but I can empathize with your dilemma.

It's untested, but if you set <avg_ncpus></avg_ncpus> to more cores than your BOINC client has available (or to more than 16 cores), the LHC@home server should stop sending WUs whose requirements are impossible to fulfill.
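A minimal app_config.xml sketch of that untested idea (the app name below is a placeholder I made up; check client_state.xml or the BOINC Manager for the exact name of the subproject you want to block):

<app_config>
   <app_version>
      <app_name>subproject_to_block</app_name>  <!-- placeholder: substitute the real app name -->
      <avg_ncpus>32</avg_ncpus>                 <!-- more cores than the host has, so the tasks can never be scheduled -->
   </app_version>
</app_config>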
25) Message boards : News : Server upgrade (Message 37596)
Posted 13 Dec 2018 by marmot
Post:
"General terms-of-use for this BOINC project. "

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (From default settings)

What is the function of this checkbox and what are the effects of not putting the X in this box?
26) Message boards : Cafe LHC : Milestones (Message 35097)
Posted 26 Apr 2018 by marmot
Post:

It has always been a "credit" contest for most people doing this since way back before BOINC, with SETI classic.
...as I consider silly things on websites that basically were made to hook people so the company owner can see how many billions they can make before they are 35 years old (hint of the day)


The credit rewards are exactly like the dopamine-rewarding 'like' buttons on social media posts, and BOINCers are hooked on those rewards.

Instead of getting actual currency, they get paid in dopamine and epeen.

Gaming or BOINCing, both are dopamine-based addictions.
27) Message boards : ATLAS application : ATLAS multi-core (Message 34535)
Posted 4 Mar 2018 by marmot
Post:
OK, I checked server Olmec and it had an ATLAS task that timed out.
It had a run time of 8+ days.
I opened the VM and it was still running, responsive to keystrokes and using CPU in the process manager, but it's not going to validate.
28) Message boards : ATLAS application : ATLAS multi-core (Message 34534)
Posted 4 Mar 2018 by marmot
Post:

I have seen many of those (not here at LHC): you can check your CPU stats and it will be doing nothing... yet they will still say they are running in the BOINC Manager.

And checking the VB log, it is running and looks like it will continue as long as I let it run, just staying at 100% progress and running.


Are the VMs aborted or sitting in a reset state in your VBox Manager?

I haven't seen that happen in ATLAS, but I had about a 2% error rate in Theory where the BOINC Manager would show the task running at 100% progress while the VBox Manager showed the VM in a reset state.

ATLAS jobs are generally finishing in about 60,000 seconds, but 3 of the last 15 (20%) took 130k, 168k and 185k seconds to complete.

The one that ran 259,213 seconds didn't validate, so maybe they have set an upper limit of 3 days since that record-setter some months back.
29) Message boards : Cafe LHC : ~~~Last Person To Post Wins~~~ (Message 34435)
Posted 21 Feb 2018 by marmot
Post:
Server "Indus" found a top 1000 prime number Jan 23rd and the project admin took credit for it at http://primes.utm.edu/.

How angry would you be?
30) Message boards : LHCb Application : LHCb VMs have longer runtimes (Message 34434)
Posted 21 Feb 2018 by marmot
Post:
Since yesterday evening the average runtimes of LHCb VMs are much longer than in the weeks before.
Are we crunching another type of job, or is it a result of the server work?



I switched one machine to LHCb, and 50 of the 75 WUs had run times of typically 2,000-4,000 seconds and paid about 3 credits each.
That's a lot of bandwidth used for such short runs.

They did two CONDOR jobs in those 2,000 seconds.
For example:
Condor JobID: 24211.257
Condor JobID: 24211.462
were completed.

The 25 of 75 WUs that survived past 15,000 seconds (usually ~43,000 seconds) completed between 20 and 60 CONDOR jobs. (This was after I got the issues with the WiFi router resolved.)

Are your extremely long-running WUs performing 80, 120, or 150 CONDOR jobs?

Is the WU's algorithm for deciding how many CONDOR jobs to run before shutting down the VM not working correctly in all environments?
31) Message boards : ATLAS application : ATLAS multi-core (Message 34433)
Posted 21 Feb 2018 by marmot
Post:
8hrs to get to 99.983% and just dragging its Atlas feet


Don't abort those. They'll almost certainly complete and validate safely. Many ATLAS tasks will run for over 24 hours and a very few for over 2 days.

I've done a lot of experimenting with configs, and with all the WU core counts from 1 to 8, over the last 3 months, and here are some of the longest-running WUs I've gotten.
BTW, 4 cores are optimal per my data, as was reported in an earlier post on optimal core counts (see the app_config sketch after the task list below).

8 cores (117511.21 seconds):
Task 158518582 | WU 76136035 | sent 2 Oct 2017, 21:59:55 UTC | reported 6 Oct 2017, 22:13:00 UTC | Completed and validated | run time 117511.21 s | CPU time 601975 s | credit 353.08 | ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64

4 cores (197182.2 seconds, high credit):
Task 168867864 | WU 81727868 | sent 9 Dec 2017, 7:38:08 UTC | reported 12 Dec 2017, 12:20:34 UTC | Completed and validated | run time 197182.2 s | CPU time 232063.5 s | credit 5492.62 | ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64

4 cores (217693.71 seconds, low credit):
Task 168640849 | WU 81602634 | sent 4 Dec 2017, 11:34:46 UTC | reported 10 Dec 2017, 18:21:53 UTC | Completed and validated | run time 217693.71 s | CPU time 197709.2 s | credit 857.05 | ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64

4 cores record holder (330805.33 seconds, or over 3 days):
Task 168867893 | WU 81727709 | sent 9 Dec 2017, 7:38:08 UTC | reported 17 Dec 2017, 7:17:18 UTC | Completed and validated | run time 330805.33 s | CPU time 561754.2 s | credit 4551.5 | ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64

1 core (126222.36 seconds):
Task 158790822 | WU 76294126 | sent 8 Oct 2017, 11:36:17 UTC | reported 12 Oct 2017, 16:44:54 UTC | Completed and validated | run time 126222.36 s | CPU time 126713.2 s | credit 144.44 | ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64

2 cores (136466.87 seconds):
Task 168748377 | WU 81677587 | sent 6 Dec 2017, 11:40:09 UTC | reported 8 Dec 2017, 2:17:48 UTC | Completed and validated | run time 136466.87 s | CPU time 152899.8 s | credit 3332.59 | ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64
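For reference, here is a minimal app_config.xml sketch for pinning ATLAS vbox tasks to 4 cores. The app name is my assumption (check client_state.xml for the exact name), and the core count can also be set in the project's web preferences:

<app_config>
   <app_version>
      <app_name>ATLAS</app_name>    <!-- assumed app name for the vbox64_mt_mcore_atlas version -->
      <avg_ncpus>4</avg_ncpus>      <!-- run the VM with 4 cores -->
   </app_version>
</app_config>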
32) Message boards : Number crunching : Frequency of Condor Connections (Message 34431)
Posted 21 Feb 2018 by marmot
Post:
If you have a small cluster at home, it should be possible to setup a local squid cache to reduce the external traffic. A few people have looked into this but we don't have any detailed instructions for anyone to follow. I will create a thread in Number Crunching about this.


Thank you.

computezrmle reminded me of the benefits of using a proxy server and some strategies for setting one up.
33) Message boards : Sixtrack Application : "no tasks are available for Sixtrack", over 1 million visible (Message 34233)
Posted 2 Feb 2018 by marmot
Post:
I have a very different perspective. SixTrack is a very active sub-project, because its system requirements are generally low and downloads are small. You'll find systems on Sixtrack that no other LHC project could support.

Not requiring VBox is a major part of that. VBox installs are still troublesome. New users are often turned away from LHC because they don't want to deal with the checklist -- and they shouldn't have to. The ATLAS native app is no solution because it adds several layers of difficulty. Also, the native app does very little for users even if they are on Linux. The major complaints remain download size and system requirements. My system, at least, received the same size downloads in addition to CVMFS traffic. If LHC can use a cache that's great, but it doesn't help users with limited internet traffic. And I didn't see improvements in system usage or stability (pausing tasks specifically).

I fear users will be excluded if changes are made that profit a small minority.


Those are all good points. The Sixtrack VM doesn't need to eliminate the native Sixtrack. It would just be available to those who desire it, and it would likely remain available while the feeder has trouble with the regular Sixtrack. It would also be another lower-requirement option, since its RAM usage would be similar to Theory's 630 MB.

Consider my suggestion as an option, not a replacement, and as a method to keep Sixtrack flowing during these rough patches.

I don't want to exclude low-RAM or low-core-count users. That was me last year, with one of my laptops having only 2 GB RAM and my main machine a 1090T with only 8 GB RAM.
34) Message boards : ATLAS application : No tasks available (Message 34226)
Posted 1 Feb 2018 by marmot
Post:
It seemed to be in the backed-up condition.

Did other things for 15 minutes, came back, did a single update, and got 4 ATLAS tasks.

Set the <work_fetch_debug>1</work_fetch_debug> flag and will see what that says if this happens again.
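For context, enabling that flag in the BOINC client's cc_config.xml looks like this (reload it with Options -> Read config files, or restart the client):

<cc_config>
   <log_flags>
      <work_fetch_debug>1</work_fetch_debug>   <!-- log the client's work-fetch decisions -->
   </log_flags>
</cc_config>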

The ATLAS tasks are very short (2,xxx sec, 29 credit) compared to any I've seen before.
35) Message boards : Number crunching : How many cores does a vbox64 LHC use? (Message 34221)
Posted 1 Feb 2018 by marmot
Post:
I cannot adjust cores as Toby suggested, because I am not the owner of the project; Grcpool is the owner. However, I will be able to return to BAM! in a few weeks, as I will have a coin balance large enough that I won't have to use their pool.

Why is Gridcoin with so many Teams in LHCatHome?


Labor should be rewarded, especially with the downward pressure on wages from automation, the gig economy and AI.

The more mercenary Gridcoin crunchers go where they can best compete.
I'm here because my training is in physics (though I never worked as a scientist), and this science is closest to my heart.
36) Message boards : Number crunching : How many cores does a vbox64 LHC use? (Message 34220)
Posted 1 Feb 2018 by marmot
Post:
Search the forum for app_config.xml setup for Theory.

6GB?

You can run 8 Theory tasks if you set the working memory to 512 MB (instead of 630 MB) in the app_config and still leave yourself 2 GB for the OS, caches and Collatz; a sketch is below.
They swap to the drive a lot more, but they succeed.
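A minimal sketch of such an app_config.xml, assuming the app name is Theory and that the vboxwrapper --memory_size_mb switch is what sets the VM's RAM here (both are my assumptions, so check the forum thread for the exact form):

<app_config>
   <app_version>
      <app_name>Theory</app_name>               <!-- assumed app name; confirm in client_state.xml -->
      <avg_ncpus>1</avg_ncpus>
      <cmdline>--memory_size_mb 512</cmdline>   <!-- assumed vboxwrapper option to shrink the VM from 630 MB to 512 MB -->
   </app_version>
</app_config>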

Good luck!
37) Message boards : Sixtrack Application : "no tasks are available for Sixtrack", over 1 million visible (Message 34215)
Posted 1 Feb 2018 by marmot
Post:
Per Crystal Pellet:
"I think BOINC's feeder gets confused when there is a massive amount of SixTrack workunits in the queue as we've seen before.
Even when requesting SixTrack's, you often get the message 'No tasks available'."

https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4513&postid=33587#33587

and
"The issue seems to be related to the combined presence of short SixTrack tasks and a huge backlog of tasks for SixTrack. "


I was thinking you could bundle 20 to 100 Sixtrack tasks into their own VBox download, thus averaging the short runs with longer runs and reducing the number of Sixtrack WUs in the queue by a factor of 20-100. You already have the VMs developed, and it could solve all these feeder issues.
It's what I already do with Sixtrack on one machine.
38) Message boards : ATLAS application : No tasks available (Message 34213)
Posted 1 Feb 2018 by marmot
Post:
Gave up on Sixtrack and tried coming back to ATLAS... *sigh*

Managed to get 4 to download on one machine, but the other machine stubbornly continues to get the 'no tasks available' error.
39) Message boards : Sixtrack Application : "no tasks are available for Sixtrack", over 1 million visible (Message 34212)
Posted 1 Feb 2018 by marmot
Post:


At the same time, I also suggest allowing for other projects under the LHC@Home umbrella which do not need VMs - e.g. ATLAS sends a native Linux application. As a volunteer, this allows me not to waste (much) CPU time.


Sixtrack is the only project that sends native applications to Windows machines, right?
The only WU type I've never tried is ALICE.
40) Message boards : Number crunching : How many cores does a vbox64 LHC use? (Message 34211)
Posted 1 Feb 2018 by marmot
Post:
CMS needs 2048 MB of RAM each, Theory 630 MB, and ATLAS 3420 MB for single cores.

I've seen that situation when not enough RAM was seen by BOINC.

Check how much RAM you are allotting to BOINC while it is in use. Also, make sure you have set BOINC to use all cores; I sometimes set it to 50% while watching videos or gaming and forget to set it back to 100% later.

Your LHC tasks should be using 4.7 GB with those 3 running.
You can check their properties to see how much working set they want to claim.

