61) Message boards : News : test of SixTrack 5.00.00 (Message 35867)
Posted 12 Jul 2018 by gyllic
Post:
Nice!

A question, just for information:
Will there be an application for Linux on ARMv7 32-bit (e.g. Banana Pi)? I only see an Android version for ARM64-v8a and a Linux one for 64-bit ARM.

I tried to compile the -dev branch (from GitHub) on my Banana Pi a couple of days ago, but roundctl failed to compile. Unfortunately I don't have the time to look into the code or to investigate whether the problem is on my side.
Is there a way to cross-compile?
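
In case cross-compiling is an option, a rough sketch of what I would try on a Debian-based x86 host is below; the toolchain packages are Debian's, and the CMake variables are a generic guess rather than tested SixTrack build instructions:

# install an ARMv7 hard-float cross toolchain:
sudo apt install gcc-arm-linux-gnueabihf gfortran-arm-linux-gnueabihf
# then point a CMake-based build at the cross compilers:
cmake -DCMAKE_SYSTEM_NAME=Linux \
      -DCMAKE_SYSTEM_PROCESSOR=arm \
      -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc \
      -DCMAKE_Fortran_COMPILER=arm-linux-gnueabihf-gfortran ..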
62) Message boards : Theory Application : New Version 263.60 (Message 35699)
Posted 29 Jun 2018 by gyllic
Post:
The log shows:

2018-06-29 08:17:46 (5244): Guest Log: [DEBUG] Detected squid proxy http://192.168.1.2:3128

but then:

2018-06-29 08:18:52 (5244): Guest Log: VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2018-06-29 08:18:52 (5244): Guest Log: 2.4.4.0 3582 1 25760 6531 3 1 183730 10240000 2 65024 0 15 100 0 0 http://cvmfs-stratum-one.cern.ch/cvmfs/grid.cern.ch http://128.142.168.202:3125 1

https://lhcathome.cern.ch/lhcathome/result.php?resultid=199212943
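
For what it's worth, the table above looks like the output of cvmfs_config stat; its PROXY column shows the proxy cvmfs is actually using. On a native installation the same check would be (grid.cern.ch used as an example repository):

cvmfs_config stat grid.cern.ch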
63) Message boards : Number crunching : Downloads have stalled (Message 35697)
Posted 29 Jun 2018 by gyllic
Post:
Additionally, ATLAS result upload is still not working properly:

29.06.2018 08:28:40 | LHC@home | Starting task evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0
29.06.2018 10:47:45 | LHC@home | Computation for task evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0 finished
29.06.2018 10:47:47 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 10:50:04 | LHC@home | Backing off 00:03:16 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 10:53:22 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:02:01 | LHC@home | Backing off 00:07:07 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:09:09 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:11:23 | LHC@home | Backing off 00:12:22 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:23:46 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:26:35 | LHC@home | Backing off 00:19:15 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:47:19 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:49:47 | LHC@home | Backing off 00:40:19 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result


Maybe the problem lies in the connection between the inside and the outside of CERN, since you (the people inside CERN) don't see problems while we (outside CERN) do.
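
In the meantime, if anybody does not want to wait out the growing backoffs, a manual retry from the command line should also be possible (untested sketch; the project URL and file name are taken from the log above):

# ask the client to retry the pending upload immediately:
boinccmd --file_transfer https://lhcathome.cern.ch/lhcathome/ \
    evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result retry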
64) Message boards : ATLAS application : Unable to upload an Atlas task (Message 35696)
Posted 29 Jun 2018 by gyllic
Post:
Is it still bad today? I don't see any problems on my hosts but they are all inside CERN...

20-Jun-2018 10:03:10 [LHC@home] Started download of J3aMDmBdkpsnlyackoJh5iwnABFKDmABFKDmdqHODmABFKDmCjQ5Sm_EVNT.14154105._000492.pool.root.1
20-Jun-2018 10:03:15 [LHC@home] Finished download of J3aMDmBdkpsnlyackoJh5iwnABFKDmABFKDmdqHODmABFKDmCjQ5Sm_EVNT.14154105._000492.pool.root.1
Yes, ATLAS result upload is still not working properly:

29.06.2018 08:28:40 | LHC@home | Starting task evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0
29.06.2018 10:47:45 | LHC@home | Computation for task evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0 finished
29.06.2018 10:47:47 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 10:50:04 | LHC@home | Backing off 00:03:16 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 10:53:22 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:02:01 | LHC@home | Backing off 00:07:07 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:09:09 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:11:23 | LHC@home | Backing off 00:12:22 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:23:46 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:26:35 | LHC@home | Backing off 00:19:15 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:47:19 | LHC@home | Started upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result
29.06.2018 11:49:47 | LHC@home | Backing off 00:40:19 on upload of evxMDm0ywssnyYickojUe11pABFKDmABFKDmZ0NNDmABFKDmt4oKNo_0_r1830430149_ATLAS_result


Maybe the problem lies in the connection between the inside and the outside of CERN, since you (the people inside CERN) don't see problems while we (outside CERN) do.
65) Message boards : Theory Application : New Version 263.60 (Message 35691)
Posted 29 Jun 2018 by gyllic
Post:
Instead the logs show that CVMFS ignores my local proxy and configures a CERN repository and a CERN proxy:
Guest Log: 2.4.4.0 3508 0 24896 6527 3 1 183730 10240000 2 65024 0 20 95 13 40 http://cvmfs-stratum-one.cern.ch/cvmfs/grid.cern.ch http://128.142.33.31:3125 1
Same here.
66) Message boards : ATLAS application : ATLAS native_mt fail (Message 35633)
Posted 24 Jun 2018 by gyllic
Post:
Got some problems with properties, I think:
https://lhcathome.cern.ch/lhcathome/result.php?resultid=199031637
Tasks crash after ten minutes.
Can I do anything about that?
Looks like your cvmfs installation is not correct:
Checking for CVMFS
ls: cannot access '/cvmfs/atlas.cern.ch/repo/sw': No such file or directory
cvmfs_config doesn't exist, check cvmfs with cmd ls /cvmfs/atlas.cern.ch/repo/sw
ls /cvmfs/atlas.cern.ch/repo/sw failed,aborting the jobs

Did you install cvmfs, and if yes, how?

Btw, you also need to install singularity (if not already done) to run native ATLAS on most Linux distributions.
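
A quick sanity check of a cvmfs setup could look like this (the probe command is the same one used in the Debian guide further down this page):

# all configured repositories should answer with OK:
cvmfs_config probe
# this is the directory the ATLAS wrapper checks in the log above:
ls /cvmfs/atlas.cern.ch/repo/sw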
67) Message boards : ATLAS application : Unable to upload an Atlas task (Message 35598)
Posted 21 Jun 2018 by gyllic
Post:
Is it still bad today? I don't see any problems on my hosts but they are all inside CERN...
I still can't upload most of the results (only 1 out of 4 uploaded successfully this morning), e.g.:

21.06.2018 08:53:45 | LHC@home | Started upload of YEkMDmjVMpsnyYickojUe11pABFKDmABFKDmQlpODmABFKDmzqUZsn_0_r1771124134_ATLAS_result
21.06.2018 09:00:36 | LHC@home | Backing off 00:27:39 on upload of YEkMDmjVMpsnyYickojUe11pABFKDmABFKDmQlpODmABFKDmzqUZsn_0_r1771124134_ATLAS_result
21.06.2018 09:41:45 | LHC@home | Backing off 00:38:05 on upload of YEkMDmjVMpsnyYickojUe11pABFKDmABFKDmQlpODmABFKDmzqUZsn_0_r1771124134_ATLAS_result
68) Message boards : ATLAS application : 8-core ATLAS Work unit have reached 100% and are still running after 28 hours. Is that ok? (Message 35571)
Posted 19 Jun 2018 by gyllic
Post:
Is this normal or should I abort it?
Abort it.

...
CPU time 00:00:13
...
A CPU time of 13 seconds after a runtime of over a day means something is wrong, so you can abort it. Additionally, this WU has already been crunched by another computer; running it twice is a waste of CPU resources.
69) Message boards : ATLAS application : Unable to upload an Atlas task (Message 35566)
Posted 19 Jun 2018 by gyllic
Post:
Anyone else having result upload problems?

19.06.2018 09:36:10 | LHC@home | Computation for task q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0 finished
19.06.2018 09:36:13 | LHC@home | Started upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 09:38:26 | LHC@home | Backing off 00:02:46 on upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 09:41:13 | LHC@home | Started upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 09:44:52 | LHC@home | Backing off 00:07:21 on upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 09:52:13 | LHC@home | Started upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 09:54:30 | LHC@home | Backing off 00:10:59 on upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 10:05:58 | LHC@home | Started upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 10:27:40 | LHC@home | Backing off 00:24:09 on upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 10:51:50 | LHC@home | Started upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 10:55:36 | LHC@home | Backing off 00:57:05 on upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 11:52:42 | LHC@home | Started upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 12:22:14 |  | Project communication failed: attempting access to reference site
19.06.2018 12:22:14 | LHC@home | Temporarily failed upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result: transient HTTP error
19.06.2018 12:22:14 | LHC@home | Backing off 01:17:25 on upload of q6HMDmwuEpsnyYickojUe11pABFKDmABFKDmu8sKDmABFKDmqauBzm_0_r1484590330_ATLAS_result
19.06.2018 12:22:17 |  | Internet access OK - project servers may be temporarily down.
70) Message boards : ATLAS application : ATLAS issues (Message 35525)
Posted 14 Jun 2018 by gyllic
Post:
cvmfs should now work.

Your most recent tasks show that singularity is not installed.
If your OS is not SLC6 (which is obviously the case), you also have to install singularity:
https://singularity.lbl.gov/

Once singularity is working, you should finally be good to go.
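
A minimal check that the installation worked:

# should print the installed version if singularity is on the PATH:
singularity --version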
71) Message boards : Number crunching : Checklist Version 3 for Atlas@Home (and other VM-based Projects) on your PC (Message 35516)
Posted 13 Jun 2018 by gyllic
Post:
@ gyllic when you say "one core task" is that an ATLAS one core task you are referring to?
Yes.
If you struggle with insufficient RAM to use all your CPU cores efficiently, you could try the native ATLAS app. It only runs on Linux, but it has better efficiency and needs much less RAM than the ATLAS vbox tasks. Unfortunately, the only way to force getting native ATLAS tasks is to remove VirtualBox from your system entirely (or to "manipulate" the boinc config, see the sketch below), so the system would then run only native ATLAS (and sixtrack).
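
One such "manipulation" would, as far as I know, be the dont_use_vbox option in cc_config.xml (a sketch, assuming a reasonably recent boinc client; put the file into the boinc data directory and restart the client or re-read the config files):

<!-- hide the installed VirtualBox from the scheduler so that only
     non-vbox applications (native ATLAS, sixtrack) are requested -->
<cc_config>
  <options>
    <dont_use_vbox>1</dont_use_vbox>
  </options>
</cc_config>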
72) Message boards : Number crunching : Checklist Version 3 for Atlas@Home (and other VM-based Projects) on your PC (Message 35507)
Posted 13 Jun 2018 by gyllic
Post:
You may want to stop using 8-core vbox ATLAS tasks, since their CPU efficiency (cpu time / (run time * number of vbox cores)) is pretty bad.
For that reason, generally speaking, using more than 4 cores per vbox ATLAS task is not a good idea; a one-core task has the best efficiency.
Since one 8-core task needs much less RAM than eight 1-core tasks, you have to choose the best number of cores per ATLAS task for your setup, depending on your available RAM. A worked example is sketched below.
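
As a worked example with made-up numbers, take an 8-core vbox task with 10 hours of run time that accumulated only 40 hours of CPU time:

# cpu efficiency = cpu time / (run time * number of vbox cores)
awk 'BEGIN { cpu=40; run=10; cores=8;
             printf "cpu efficiency = %.1f%%\n", 100*cpu/(run*cores) }'
# prints: cpu efficiency = 50.0%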
73) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) (Message 35421)
Posted 3 Jun 2018 by gyllic
Post:
If you look at your PC which crunches ATLAS tasks, https://lhcathome.cern.ch/lhcathome/results.php?hostid=10546150, almost every vbox ATLAS task that I have seen is NOT producing valid results. BOINC says they are valid and gives you credit, but the vbox tasks produce no HITS file (which is the result file the science actually needs) and the runtimes are way too short. So all your vbox tasks run and give you credit, but no scientific result is produced, i.e. this is a total waste of CPU time. Only the native tasks produced valid results with HITS files.
So it does not make sense to compare your vbox credits with the native credits. You should check your vbox setup or switch to native only. Generally speaking, native has the benefits which computezrmle described. Additionally, native tasks can reuse the cvmfs cache for every new task, whereas the cvmfs cache inside a vbox task is only available to that one specific vbox task and gets deleted when the vbox task is removed.
74) Message boards : ATLAS application : ATLAS being sent, even though not requested. (Message 35353)
Posted 23 May 2018 by gyllic
Post:
I have an i7-4790 machine (Ubuntu 16.04) with VirtualBox 5.2.10 installed. I have selected only LHCb and Theory, but am getting ATLAS too, about half the work units in fact.
I have enabled "accept work from other applications?", but the server status shows that the others always have work available. So why am I getting any ATLAS at all?
Maybe this has something to do with the "allow beta apps" option in your preferences. Try disabling this option and see if you still get ATLAS tasks.
75) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) (Message 35275)
Posted 16 May 2018 by gyllic
Post:
CVMFS_QUOTA_LIMIT=40960
A bit too much to serve only ATLAS.
But, if anybody doesn't know what to do with 40GB RAM ...
;-)
CVMFS_QUOTA_LIMIT defines the size of the local cache directory (in this case the /scratch/cvmfs directory). It is a soft quota, so there should be additional free space on the hard drive. Maybe 40 GB is a bit too much, but this PC has a 1 TB hard drive and is only used for ATLAS, so there is plenty of free and otherwise unused storage :-) .

Once the local job cache runs dry, new tasks often fail a few seconds after their start.
This can be solved by running "sudo cvmfs_config wipecache" just before a new task starts.
From the docs: "If you're about to change a current cache location or if you decrease the quota, run sudo cvmfs_config wipecache first in order to wipe out the current location."
76) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) (Message 35271)
Posted 15 May 2018 by gyllic
Post:
I just wanted to ask about the CVMFS configuration:

CVMFS_HTTP_PROXY="http://206.167.181.94:3128


I guess this is your private squid?
I just took the default.local file located at http://atlasathome.cern.ch/boinc_conf/default.local, which is downloaded by the setup script provided in your "announcement" post (so this is not my personal squid). Is it not valid anymore?

You could mention that if people don't have access to or want to set up their own squid they can set

CVMFS_HTTP_PROXY=DIRECT
That is a good idea! Unfortunately I can't edit the original post anymore (or at least I don't know how :-) ).
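
For reference, a default.local without any squid would then look roughly like this (repositories and cache settings copied from the guide's default.local; only the proxy line changes):

CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_CACHE_BASE=/scratch/cvmfs
CVMFS_QUOTA_LIMIT=40960
CVMFS_HTTP_PROXY=DIRECT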

I've added a link to here from this Q&A thread

Also on that thread (this post) it is described how to disable VBox tasks in BOINC configuration.
Thank you. I would add the information to the original post, but I can't edit it anymore.
With pleasure :-) @computezrmle: thank you :-)
77) Questions and Answers : Windows : Scheduler request failed: HTTP internal server error (Message 35266)
Posted 15 May 2018 by gyllic
Post:
Are you connected to the new domain?
https://lhcathome.cern.ch/lhcathome/
78) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) (Message 35239)
Posted 12 May 2018 by gyllic
Post:
I'm happy that it helped :-) .

I received an ATLAS task and it started running but top shows processes named VBoxHeadless, VBoxSvc and VBoxXPCOMIPCD which suggests to me that the ATLAS task is running on VirtualBox rather than native.
Yes, you got vbox tasks.
You have surely enabled beta apps in your LHC@home preferences (https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project)?! The only way I know to force getting native ATLAS tasks is to remove VirtualBox completely from your PC. Maybe you can also somehow tell boinc that VirtualBox is not installed although it is. Otherwise, as far as I know, getting native tasks is random or not possible.
79) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) (Message 35234)
Posted 12 May 2018 by gyllic
Post:
Hi Guys!

This is a short guide for building every program you need to run native ATLAS tasks on Debian 9 (Stretch) from scratch. I have set up a new PC with a Debian 9.4 net installation (without any desktop environment) and thought that a guide like this might help others. You can install most of the programs from repositories, but this guide shows you how to compile and set up everything from source. By compiling from source, you get the latest available code (and not the sometimes rather old packages from the repositories). Since the PC is not connected to any display and is instead managed from another PC, no GUI components are compiled for boinc. At the time of writing, boinc is at version 7.11.0, cvmfs at version 2.6.0 and singularity at version 2.5.1. And please excuse my bad English :-).

To run the native ATLAS tasks you need:

- Linux (Debian in this case; others are possible) https://debian.org
- cvmfs (CernVM File System) https://cernvm.cern.ch/portal/filesystem
- singularity (container runtime) https://singularity.lbl.gov/
- and, of course, boinc: https://boinc.berkeley.edu/

Also, you need root rights on your PC, and you have to accept beta apps in your LHC@home settings.

Let's start by compiling and installing boinc from source:

1. Update the package lists and install all required packages:
sudo apt update
sudo apt install git build-essential pkg-config libsqlite3-dev libssl-dev  libcurl4-openssl-dev m4 dh-autoreconf zlib1g-dev

2. Add user boinc (no root rights; I chose no password):
sudo adduser boinc

3. Change user, clone git repository, compile and install boinc:
su
su boinc
cd
mkdir boinc_source
git clone https://github.com/BOINC/boinc.git boinc_source
cd boinc_source
./_autosetup
./configure --disable-server --disable-manager --enable-optimize 
make
su
make install

4. Start and stop the client once to generate the needed files:
/usr/local/etc/init.d/boinc-client start
/usr/local/etc/init.d/boinc-client stop
exit
cd

5. To be able to manage this boinc client from another PC, you have to edit the "gui_rpc_auth.cfg" file, which is located in the boinc user's home directory. A random password is already written in this file. Write a new password of your choice into the file if you want, then save and close it. On a multiuser computer, this file should be protected against access by other users. Additionally, the file "remote_hosts.cfg" has to be created in the boinc user's home directory. Write into this file all IPs from which you want to access this boinc client (a connection test from the managing PC is sketched after the list), e.g.:
192.168.0.2
192.168.0.3
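
A connection test from one of these managing PCs might then look like this (the IP of the boinc PC and the password are placeholders; the password is the one from gui_rpc_auth.cfg):

boinccmd --host 192.168.0.10 --passwd your_rpc_password --get_state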

6. Start the boinc client again (as root), as shown below. Now you should be able to connect from a different PC (with boinc manager), add LHC@home and adjust the settings as needed (you may have to increase the allowed memory). Boinc should then be good to go. Note that you have to start the boinc client again after every PC restart (to change that, Google will probably help you :-) ).
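
For example, using the same init script as in step 4 (run as root):
/usr/local/etc/init.d/boinc-client start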

Build and install cvmfs:

1. Change to a different user (e.g. your own), here called testing:
su testing

2. Install all required packages:
sudo apt install autotools-dev cmake libcap-dev libssl-dev libfuse-dev pkg-config libattr1-dev patch python-dev unzip uuid-dev valgrind libz-dev gawk perl psmisc autofs fuse curl attr libfuse2 zlib1g gdb uuid-dev uuid adduser

3. Make a directory, clone the git repository, build and install cvmfs:
cd
mkdir cvmfs_source
git clone https://github.com/cvmfs/cvmfs.git cvmfs_source
cd cvmfs_source
mkdir -p build
cd build
cmake ../
make
sudo make install

4. Make a directory that is used as the cache:
sudo mkdir -p /scratch/cvmfs

5. Add the file default.local into /etc/cvmfs/
sudo nano /etc/cvmfs/default.local

and add the following to it (here the cache size is set to 40 GB; change CVMFS_QUOTA_LIMIT to adjust it):
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_CACHE_BASE=/scratch/cvmfs
CVMFS_QUOTA_LIMIT=40960
#CVMFS_HTTP_PROXY="http://202.122.33.53:3128"
#CVMFS_HTTP_PROXY='http://ca-proxy-meyrin.cern.ch:3128;http://ca-proxy.cern.ch:3128;http://ca01.cern.ch:3128|http://ca02.cern.ch:3128|http://ca03.cern.ch:3128|http://ca04.cern.ch:3128|http://ca05.cern.ch:3128|http://ca06.cern.ch:3128'
CVMFS_HTTP_PROXY="http://206.167.181.94:3128|http://ca-proxy.cern.ch:3128|http://kraken01.westgrid.ca:3128|http://atlascaq4.triumf.ca:3128"

6. Set up cvmfs for the first time:
sudo cvmfs_config setup

7. Test whether cvmfs is working (you should get all "OK"). If it fails, try "sudo service autofs restart" and probe again.
cvmfs_config probe
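
If everything is fine, the output should look roughly like this (one line per configured repository; the exact wording may differ between versions):

Probing /cvmfs/atlas.cern.ch... OK
Probing /cvmfs/atlas-condb.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK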


Build and install singularity:

1. First, install required packages:
sudo apt install libarchive-dev squashfs-tools

2. Change directory, make a new one, clone git repository, build singularity and install it:
cd
mkdir singularity_source
git clone https://github.com/singularityware/singularity.git singularity_source/
cd singularity_source/
./autogen.sh
./configure --prefix=/usr/local
make
sudo make install

3. Test that it works (it should print the version):
singularity --version


If everything worked well, you should now have everything you need to run native ATLAS tasks on Debian 9. The same process may also work on other Linux distributions (especially Debian derivatives like Ubuntu), but it was only tested on Debian 9 (Stretch).

Feel free to report mistakes, improvements, ask questions, make any suggestions, etc. It would be much appreciated!

Gyllic
80) Message boards : ATLAS application : what's the average share of finished tasks with hits created? (Message 35179)
Posted 5 May 2018 by gyllic
Post:
1) yes, the OS still is WinXP, because I am running GPUGRID with two GTX980ti, and any OS beyond XP increases the GPU processing time by about 20%, due to the WDDM overhead in the newer OSs.
In fact, GPUGRID had announced some time ago that XP support will end by April 2018, so I was expecting to need to upgrade the machine to Win10 anyway. However, so far they still support XP.
Is MS still providing security updates for XP? Is this WDDM overhead also an issue on Linux? Otherwise you might consider running Linux with the native ATLAS app, which works like a charm (as long as you don't stop and restart running tasks; you will still produce valid results, but if you restart them they start all over again, I think) and needs much less RAM. For GPU performance you can install the proprietary NVIDIA driver for Linux.
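
On Debian, for example, installing the proprietary driver can be as simple as this (assuming the non-free repository is enabled; package names differ between distributions):

sudo apt install nvidia-driver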

