1) Message boards : Number crunching : Android x86 app (Message 37909)
Posted 2 Feb 2019 by marmot
Post:
Hi. I have an ASUS ZenFone Zoom smartphone (Intel Z3580 quad-core 2.3 GHz CPU), but at the moment I can only help SETI research. Can you tell me when you will add an app for x86 Android devices?


We can run on Android devices; see SixTrackTest here: https://lhcathome.cern.ch/lhcathome/apps.php

Currently there are two issues:

1: The RAM usage of SixTrack is relatively high - almost all internal arrays are allocated with a fixed size at compile time and set to a "worst case" capacity. A lot of Android devices simply do not have sufficient spare RAM. Work is in progress to restructure the internals to reduce memory usage and only allocate what is needed.

2: The security model in the current Android release has restricted the use of some syscalls, and we seem to use some of the restricted ones. This needs to be changed.

Once both of these issues have been resolved, we can push the android version out to production.



Can we get an update on the ARM Sixtrack application?

My tablet has been waiting since I got it...
2) Message boards : Sixtrack Application : multi threading six track possible in any way? (Message 37908)
Posted 2 Feb 2019 by marmot
Post:
Hi,

SixTrack is not a multi-threaded application, and due to the nature of the simulations we run on SixTrack, there is no real need to parallelise it.


Thanks for the response and your work on the project.
3) Message boards : Sixtrack Application : multi threading six track possible in any way? (Message 37884)
Posted 1 Feb 2019 by marmot
Post:
Hello,
Why not use the LHC@home preferences?
You can select the number of CPUs!
If you want to run 4 CPUs on one host and 8 CPUs on another host,
add a separate computer location with the preferences you want (CPU/GPU/application/resources).


That's running 8 single-core WUs at once, not running one 8-thread WU, and not what I'm looking for.
4) Message boards : Sixtrack Application : New WU Notification? (Message 37867)
Posted 31 Jan 2019 by marmot
Post:
Simply set the resource share at the project's (web-)preferences page according to your needs.

Example:
Project A (LHC@home): 1000
Project B (other CPU project): 1
Project C (other CPU project): 1



This just doesn't work much of the time.

Some projects will aggressively flood your cache with very short-deadline WUs and keep them coming, and SixTrack will give the message

"Not highest priority project", because all the other projects' WUs are up against their deviously short deadlines.

Resource share is a nearly useless management feature, so I came up with other methods.
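One such method can be sketched with an app_config.xml that caps how many tasks a flooding project may run at once; <project_max_concurrent> is a standard BOINC client option, though the cap of 2 here is just an illustrative value, not something from this thread:

```xml
<!-- Sketch: place as app_config.xml in the flooding project's
     directory to cap it at 2 running tasks, leaving the remaining
     cores free for SixTrack. -->
<app_config>
  <project_max_concurrent>2</project_max_concurrent>
</app_config>
```

The client re-reads the file after "Options -> Read config files", so no restart is needed.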
5) Message boards : Sixtrack Application : multi threading six track possible in any way? (Message 37866)
Posted 31 Jan 2019 by marmot
Post:
Can we assign an app_config for 8 cores, such as:

<app_config>
  <app>
    <name>sixtrack</name>
    <max_concurrent>2</max_concurrent>
    <fraction_done_exact/>
  </app>
  <app_version>
    <app_name>sixtrack</app_name>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>-t4</cmdline>
  </app_version>
</app_config>


If not, how much trouble would it be to implement multi-threading for the WUs?
6) Message boards : ATLAS application : ATLAS native - Configure CVMFS to work with openhtc.io (Message 37784)
Posted 19 Jan 2019 by marmot
Post:
From the Cloudflare http://OpenHTC.io website:

"ask computing sites with many clients to supply their own caching proxy servers."

Any idea what number they assign to 'many'?

8, maybe 15, but certainly not 30 per individual IP address?

Is it measured per project?
Would they start objecting once BOINC volunteers exceed 1,000 clients all doing ATLAS, even though each individual IP only has 4 to 6 clients?
7) Message boards : Number crunching : Local control of which subprojects run`2 (Message 37597)
Posted 13 Dec 2018 by marmot
Post:


<avg_ncpus>1.0</avg_ncpus>


I went through the same discussion a year or so back. I'm not in the pool, but I can empathize with your dilemma.

It's untested, but if you set <avg_ncpus></avg_ncpus> to more cores than your BOINC client has available (or to more than 16 cores), the LHC@home server should stop sending WUs whose requirements are impossible to fulfill.
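As a concrete (and equally untested) sketch, that would look like the following app_config.xml; the app name and core count here are illustrative placeholders, not values confirmed by the project:

```xml
<!-- Untested sketch: over-declare the core requirement so the
     scheduler stops sending these multi-core WUs to this host. -->
<app_config>
  <app_version>
    <app_name>ATLAS</app_name>   <!-- illustrative app name -->
    <avg_ncpus>32</avg_ncpus>    <!-- more cores than any of your hosts have -->
  </app_version>
</app_config>
```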
8) Message boards : News : Server upgrade (Message 37596)
Posted 13 Dec 2018 by marmot
Post:
"General terms-of-use for this BOINC project. "

(quoted from the default settings)

What is the function of this checkbox, and what are the effects of not putting the X in this box?
9) Message boards : Cafe LHC : Milestones (Message 35097)
Posted 26 Apr 2018 by marmot
Post:

It has always been a "credit" contest for most people doing this, since way back before BOINC with SETI Classic.
...as I consider silly things on websites that basically were made to hook people, so the company owner can see how many billions they can make before they are 35 years old (hint of the day)


The credit rewards are exactly like the dopamine-rewarding 'like' buttons on social media posts, and BOINCers are hooked on those rewards.

Instead of getting actual currency, they get paid in dopamine and e-peen.

Gaming or BOINCing, both are dopamine-based addictions.
10) Message boards : ATLAS application : ATLAS multi-core (Message 34535)
Posted 4 Mar 2018 by marmot
Post:
OK, I checked server Olmec and it had an ATLAS WU that timed out.
It had a run time of 8+ days.
I opened the VM and it was still running, responsive to keystrokes and using CPU in the process manager, but it's not going to validate.
11) Message boards : ATLAS application : ATLAS multi-core (Message 34534)
Posted 4 Mar 2018 by marmot
Post:

I have seen many of those (not here at LHC), and you can check your CPU stats and it will be doing nothing... yet they will still say they are running in the BOINC Manager.

And checking the VB log, it is running and looks like it will continue as long as I let it run; it just stays at 100% progress and running.


Are the VM's aborted or sitting in a reset state in your VBox Manager?

I haven't seen that happen in ATLAS, but there was a 2% error rate in Theory where the BOINC Manager would show 100% progress and running while the VBox Manager would show the WU in a reset state.

ATLAS jobs are generally finishing in about 60,000 seconds, but 3 of the last 15 (20%) took 130k, 168k and 185k seconds to complete.

The one that ran 259,213 seconds didn't validate, so maybe they have set an upper limit of 3 days since that record-setter some months back.
12) Message boards : Cafe LHC : ~~~Last Person To Post Wins~~~ (Message 34435)
Posted 21 Feb 2018 by marmot
Post:
Server "Indus" found a top 1000 prime number Jan 23rd and the project admin took credit for it at http://primes.utm.edu/.

How angry would you be?
13) Message boards : LHCb Application : LHCb VMs have longer runtimes (Message 34434)
Posted 21 Feb 2018 by marmot
Post:
Since yesterday evening the average runtimes of LHCb VMs are much longer than the weeks before.
Do we crunch another type of jobs or is it a result of the server works?



I switched one machine to LHCb, and 50 of the 75 WUs typically had run times of 2,000-4,000 seconds and paid about 3 credits each.
That's a lot of bandwidth used for such short runs.

They did two CONDOR jobs in those 2,000 seconds.
For example:
Condor JobID: 24211.257
Condor JobID: 24211.462
were completed.

The 25 of 75 WUs that survived over 15,000 seconds (usually ~43,000 seconds) completed between 20 and 60 CONDOR jobs. (This was after I resolved the issues with my WiFi router.)

Are your extremely long-runtime WUs performing 80, 120, or 150 CONDOR jobs?

Is the WU's algorithm for deciding the number of CONDOR jobs before shutting down the VM not working correctly in all environments?
14) Message boards : ATLAS application : ATLAS multi-core (Message 34433)
Posted 21 Feb 2018 by marmot
Post:
8hrs to get to 99.983% and just dragging its Atlas feet


Don't abort those. They'll almost certainly complete and validate safely. Many ATLAS WUs will run for over 24 hours, and a very few for over 2 days.

I've done a lot of experimenting with configs, and with all the WU core counts from 1 to 8, over the last 3 months, and here are some of the upper-limit WUs I've gotten.
BTW, 4 cores are optimal according to my data, as was reported in an earlier post on optimal core counts.

8 cores (117511.21 seconds):
158518582	76136035	2 Oct 2017, 21:59:55 UTC	6 Oct 2017, 22:13:00 UTC	Completed and validated	117511.21	601975	353.08	ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64


4 cores (197182.2 seconds, high credit):
168867864	81727868	9 Dec 2017, 7:38:08 UTC	12 Dec 2017, 12:20:34 UTC	Completed and validated	197182.2	232063.5	5492.62	ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64


4 cores (217693.71 seconds, low credit):
168640849	81602634	4 Dec 2017, 11:34:46 UTC	10 Dec 2017, 18:21:53 UTC	Completed and validated	217693.71	197709.2	857.05	ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64


4 cores record holder (330805.33 seconds or over 3 days):
168867893	81727709	9 Dec 2017, 7:38:08 UTC	17 Dec 2017, 7:17:18 UTC	Completed and validated	330805.33	561754.2	4551.5	ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64


1 core (126222.36 seconds):
158790822	76294126	8 Oct 2017, 11:36:17 UTC	12 Oct 2017, 16:44:54 UTC	Completed and validated	126222.36	126713.2	144.44	ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64


2 cores (136466.87 seconds):
168748377	81677587	6 Dec 2017, 11:40:09 UTC	8 Dec 2017, 2:17:48 UTC	Completed and validated	136466.87	152899.8	3332.59	ATLAS Simulation v1.01 (vbox64_mt_mcore_atlas) windows_x86_64
15) Message boards : Number crunching : Frequency of Condor Connections (Message 34431)
Posted 21 Feb 2018 by marmot
Post:
If you have a small cluster at home, it should be possible to setup a local squid cache to reduce the external traffic. A few people have looked into this but we don't have any detailed instructions for anyone to follow. I will create a thread in Number Crunching about this.


Thank you.

computezrmle reminded me of the benefits of using a proxy server and some strategies for setting one up.
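For anyone wanting to experiment before that thread appears, a minimal squid.conf sketch for a home LAN might look like this; the subnet, port and cache sizes are illustrative values that need tuning to your own network and hardware:

```
# Minimal sketch of a LAN caching proxy for BOINC/CVMFS traffic.
http_port 3128
acl localnet src 192.168.0.0/16       # adjust to your LAN's subnet
http_access allow localnet
http_access deny all
cache_mem 256 MB                      # in-RAM hot object cache
cache_dir ufs /var/spool/squid 20000 16 256   # ~20 GB on-disk cache
maximum_object_size 1024 MB           # allow large VM image downloads
```

Each client then points BOINC at the proxy via Options -> Other options -> HTTP proxy.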
16) Message boards : Sixtrack Application : "no tasks are available for Sixtrack", over 1 million visible (Message 34233)
Posted 2 Feb 2018 by marmot
Post:
I have a very different perspective. SixTrack is a very active subproject, because its system requirements are generally low and downloads are small. You'll find systems on SixTrack that no other LHC project could support.

Not requiring VirtualBox is a major part of that. VBox installs are still troublesome. New users are often turned away from LHC because they don't want to deal with the checklist -- and they shouldn't have to. The ATLAS native app is no solution, because it adds several layers of difficulty. Also, the native app does very little for users even if they are on Linux. The major complaints remain download size and system requirements. My system, at least, received the same-size downloads in addition to the CVMFS traffic. If LHC can use a cache, that's great, but it doesn't help users with limited internet traffic. And I didn't see improvements in system usage or stability (pausing tasks, specifically).

I fear users will be excluded if changes are made that profit a small minority.


Those are all good points. The SixTrack VM wouldn't need to eliminate the native SixTrack. It would just be available to those who want it, and would likely remain available while the feeder has trouble with the regular SixTrack. It would also be another lower-requirement option, since it would have RAM usage similar to Theory's ~630 MB.

Consider my suggestion as an option, not a replacement, and as a method to keep Sixtrack flowing during these rough patches.

I don't want to exclude low-RAM or low-core-count users. That was me last year, with one of my laptops having only 2 GB RAM and my main machine a 1090T with only 8 GB RAM.
17) Message boards : ATLAS application : No tasks available (Message 34226)
Posted 1 Feb 2018 by marmot
Post:
It seemed to be in the backed-up condition.

I did other things for 15 minutes, came back, did a single update, and got 4 ATLAS WUs.

I set the <work_fetch_debug>1</work_fetch_debug> flag and will see what it says if this happens again.
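For reference, that flag lives under log_flags in the client's cc_config.xml (a minimal sketch):

```xml
<!-- cc_config.xml in the BOINC data directory; re-read via
     Options -> Read config files, no client restart needed. -->
<cc_config>
  <log_flags>
    <!-- log the client's per-project work-fetch decisions -->
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```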

The ATLAS tasks are very short (2,xxx sec, 29 credit) compared to any I've seen before.
18) Message boards : Number crunching : How many cores does a vbox64 LHC use? (Message 34221)
Posted 1 Feb 2018 by marmot
Post:
I cannot adjust cores as suggested by Toby, as I am not the owner of the project; Grcpool is the owner. However, I will be able to return to BAM! in a few weeks, as I will have a coin balance large enough to not have to use their pool.

Why is Gridcoin with so many Teams in LHCatHome?


Labor should be rewarded, especially with the downward pressure on wages from automation, the gig economy and AI.

The more mercenary Gridcoin crunchers go where they can best compete.
I'm here because my training is in physics (though I never worked as a scientist), and this science is closest to my heart.
19) Message boards : Number crunching : How many cores does a vbox64 LHC use? (Message 34220)
Posted 1 Feb 2018 by marmot
Post:
Search the forum for app_config.xml setup for Theory.

6GB?

You can run 8 Theory WUs if you set the working memory to 512 MB (instead of 630 MB) in the app_config, and still leave yourself 2 GB for the OS, caches and Collatz.
They swap to the drive a lot more, but they succeed.

Good luck!
20) Message boards : Sixtrack Application : "no tasks are available for Sixtrack", over 1 million visible (Message 34215)
Posted 1 Feb 2018 by marmot
Post:
Per Crystal Pellet:
"I think BOINC's feeder gets confused when there is a massive amount of SixTrack workunits in the queue as we've seen before.
Even when requesting SixTrack's, you often get the message 'No tasks available'."

https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4513&postid=33587#33587

and
"The issue seems to be related to the combined presence of short SixTrack tasks and a huge backlog of tasks for SixTrack. "


I was thinking you could bundle 20 to 100 SixTrack WUs in their own VBox download, thus averaging the short runs with longer runs and reducing the number of SixTrack WUs in the queue by a factor of 20-100. You already have the VMs developed, and it could solve all these feeder issues.
It's what I already do with SixTrack on one machine.



