Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch)

gyllic

Joined: 9 Dec 14
Posts: 202
Credit: 2,533,875
RAC: 0
Message 35234 - Posted: 12 May 2018, 10:09:56 UTC

Hi Guys!

This is a short guide for building every program you need to run native ATLAS tasks on Debian 9 (Stretch) from scratch. I have set up a new PC with a Debian 9.4 net installation (without any desktop environment) and thought that a guide like this may help others. You can install most of the programs from repositories, but this guide shows you how to compile and set up everything from source. By compiling from source you get the latest available code (and not the sometimes rather old packages from the repositories). Since the PC is not connected to any display and is instead managed from another PC, no GUI components are compiled for BOINC. At the time of writing, BOINC is at version 7.11.0, CVMFS at version 2.6.0 and Singularity at version 2.5.1. And please excuse my bad English :-).

To run the native ATLAS tasks you need:

- Linux (Debian in this case; others are possible) https://debian.org
- CVMFS (CernVM File System) https://cernvm.cern.ch/portal/filesystem
- Singularity (a container runtime) https://singularity.lbl.gov/
- and, of course, BOINC https://boinc.berkeley.edu/

Also, you need to have root rights on your PC and you have to accept beta apps in your LHC@home settings.

Let's start by compiling and installing BOINC from source:

1. Update the package lists and install the required packages:
sudo apt update
sudo apt install git build-essential pkg-config libsqlite3-dev libssl-dev libcurl4-openssl-dev m4 dh-autoreconf zlib1g-dev

2. Add a user boinc (no root rights; I chose no password):
sudo adduser boinc
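If you prefer a non-interactive variant, something like this should work as well (a sketch; --disabled-password keeps the account password-less):
sudo adduser --disabled-password --gecos "" boinc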

3. Become root, switch to the boinc user, clone the git repository, then compile and install BOINC:
su                # become root
su boinc          # switch to the boinc user
cd
mkdir boinc_source
git clone https://github.com/BOINC/boinc.git boinc_source
cd boinc_source
./_autosetup
./configure --disable-server --disable-manager --enable-optimize
make
su                # become root again for the installation
make install

4. Start and stop the client once to generate the needed files:
/usr/local/etc/init.d/boinc-client start
/usr/local/etc/init.d/boinc-client stop
exit              # drop from root back to the boinc user
cd

5. To be able to manage this BOINC client from another PC, you have to edit the "gui_rpc_auth.cfg" file, which is located in the boinc user's home directory. A random password is already written in this file. Write a new password of your choice into the file if you want, then save and close it. On a multiuser computer, this file should be protected against access by other users (see the commands below). Additionally, the file "remote_hosts.cfg" has to be created in the boinc user's home directory. List in this file, one per line, all IPs from which you want to access this BOINC client, e.g.:
192.168.0.2
192.168.0.3
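To protect the password file from other local users, restricting its ownership and permissions should be enough (a sketch; the paths assume the boinc user's home directory used above):
sudo chown boinc:boinc /home/boinc/gui_rpc_auth.cfg
sudo chmod 600 /home/boinc/gui_rpc_auth.cfg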

6. Start the BOINC client again (as root). Now you should be able to connect from a different PC (with BOINC Manager), add LHC@home and adjust the settings as needed (you may have to increase the allowed memory). BOINC should now be good to go. Note that you have to start the BOINC client manually after every PC restart; one way to automate this is shown below.
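As an example of how to automate this, Debian 9 uses systemd, so a small unit file can start the client at boot. This is only a sketch; the binary path and the working directory are assumptions based on the installation above, so adjust them if your setup differs. Put the following into /etc/systemd/system/boinc-client.service:

[Unit]
Description=BOINC client
After=network-online.target

[Service]
User=boinc
WorkingDirectory=/home/boinc
ExecStart=/usr/local/bin/boinc
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable boinc-client
sudo systemctl start boinc-client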

Build and install CVMFS:

1. Change to a different user (e.g. your own), here called testing:
su testing

2. Install all required packages:
sudo apt install autotools-dev cmake libcap-dev libssl-dev libfuse-dev pkg-config libattr1-dev patch python-dev unzip uuid-dev valgrind libz-dev gawk perl psmisc autofs fuse curl attr libfuse2 zlib1g gdb uuid adduser

3. Make a directory, clone the git repository, then build and install CVMFS:
cd
mkdir cvmfs_source
git clone https://github.com/cvmfs/cvmfs.git cvmfs_source
cd cvmfs_source
mkdir -p build
cd build
cmake ../
make
sudo make install

4. Make a directory that will be used as the cache:
sudo mkdir -p /scratch/cvmfs

5. Create the file default.local in /etc/cvmfs/:
sudo nano /etc/cvmfs/default.local

and add the following to it (here the cache size is set to 40 GB; adjust CVMFS_QUOTA_LIMIT to change it):
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_CACHE_BASE=/scratch/cvmfs
CVMFS_QUOTA_LIMIT=40960
#CVMFS_HTTP_PROXY="http://202.122.33.53:3128"
#CVMFS_HTTP_PROXY='http://ca-proxy-meyrin.cern.ch:3128;http://ca-proxy.cern.ch:3128;http://ca01.cern.ch:3128|http://ca02.cern.ch:3128|http://ca03.cern.ch:3128|http://ca04.cern.ch:3128|http://ca05.cern.ch:3128|http://ca06.cern.ch:3128'
CVMFS_HTTP_PROXY="http://206.167.181.94:3128|http://ca-proxy.cern.ch:3128|http://kraken01.westgrid.ca:3128|http://atlascaq4.triumf.ca:3128"

6. Set up cvmfs for the first time:
sudo cvmfs_config setup
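You can also let cvmfs verify its own setup; it should report "OK":
sudo cvmfs_config chksetup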

7. Test if cvmfs is working (you should get "OK" for every repository). If it fails, try "sudo service autofs restart" and probe again:
cvmfs_config probe
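The output should look roughly like this (one line per configured repository; the exact wording may differ between versions):
Probing /cvmfs/atlas.cern.ch... OK
Probing /cvmfs/atlas-condb.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK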


Build and install Singularity:

1. First, install required packages:
sudo apt install libarchive-dev squashfs-tools

2. Change to your home directory, create a new directory, clone the git repository, then build and install Singularity:
cd
mkdir singularity_source
git clone https://github.com/singularityware/singularity.git singularity_source/
cd singularity_source/
./autogen.sh
./configure --prefix=/usr/local
make
sudo make install

3. Test if it works (you should get a version string as output):
singularity --version
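If you want to go one step further, you can test singularity and cvmfs together by running a command inside a container image served from CVMFS. The image path below is an assumption based on what the ATLAS native app used at the time; if everything is set up correctly, the command should simply print your hostname:
singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-slc6 hostname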


If everything worked well, you should now have everything you need to run native ATLAS tasks on Debian 9. The same process may also work on other Linux distributions (especially Debian derivatives like Ubuntu), but it was only tested on Debian 9 (Stretch).

Feel free to report mistakes, suggest improvements, ask questions, etc. It would be much appreciated!

Gyllic
ID: 35234
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 35235 - Posted: 12 May 2018, 12:55:41 UTC - in response to Message 35234.  

Thanks for your excellent guide! I've wanted to try ATLAS native for some time but wasn't quite sure how to put it all together, or whether it would work on my Ubuntu 16.04 system. Your directions made it easy. BTW, your English is excellent, no need for you to apologize :)
ID: 35235
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 35238 - Posted: 12 May 2018, 16:20:37 UTC

I received an ATLAS task and it started running, but top shows processes named VBoxHeadless, VBoxSVC and VBoxXPCOMIPCD, which suggests to me that the ATLAS task is running in VirtualBox rather than native. Do I need to configure something to tell BOINC to run it natively instead of in VirtualBox?

Yes, this machine has crunched ATLAS tasks in VirtualBox prior to this attempt to run ATLAS native.
ID: 35238
gyllic

Joined: 9 Dec 14
Posts: 202
Credit: 2,533,875
RAC: 0
Message 35239 - Posted: 12 May 2018, 17:02:56 UTC - in response to Message 35238.  
Last modified: 12 May 2018, 17:12:52 UTC

I'm happy that it helped :-).

bronco wrote:
I received an ATLAS task and it started running, but top shows processes named VBoxHeadless, VBoxSVC and VBoxXPCOMIPCD, which suggests to me that the ATLAS task is running in VirtualBox rather than native.

Yes, you got vbox tasks.
You surely have enabled beta apps in your LHC@home preferences (https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project)?! The only way I know to force getting native ATLAS tasks is to remove VirtualBox completely from your PC. Maybe you can also somehow tell BOINC that it is not installed although it is. Otherwise, as far as I know, getting native tasks is random at best.
ID: 35239
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 35241 - Posted: 12 May 2018, 17:57:14 UTC - in response to Message 35239.  

I tried enabling beta apps but still received vbox tasks. Configuring BOINC to ignore vbox sounds overly complicated, so I think I'll just take the easy way out and set up another box sans vbox.
ID: 35241
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,901,893
RAC: 138,093
Message 35242 - Posted: 12 May 2018, 17:59:20 UTC

@gyllic
Very nice comment.
:-)

@bronco
The normal way would be to advise a volunteer to follow Yeti's checklist.
That is helpful for running vbox tasks, but not ATLAS (native).

If you plan to run ATLAS (native) ONLY, then either:
- uninstall VirtualBox completely, or
- tell your BOINC client to ignore it.

The latter can be done by inserting "<dont_use_vbox>1</dont_use_vbox>" into your cc_config.xml and then reloading the config files or restarting your client.
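For illustration, a minimal cc_config.xml would then look like this (the file lives in the BOINC data directory, and the option goes inside the <options> block):

<cc_config>
  <options>
    <dont_use_vbox>1</dont_use_vbox>
  </options>
</cc_config>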


If you plan to run vbox tasks (e.g. Theory, CMS or LHCb) besides ATLAS (native), I would suggest setting up a second BOINC client.
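As a sketch, a second client instance can be started with its own data directory and RPC port (the directory and port below are arbitrary example values):
/usr/local/bin/boinc --dir /home/boinc2 --gui_rpc_port 31418 --allow_remote_gui_rpc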
ID: 35242
David Cameron
Project administrator
Project developer
Project scientist

Joined: 13 May 14
Posts: 387
Credit: 15,314,184
RAC: 0
Message 35269 - Posted: 15 May 2018, 14:45:26 UTC
Last modified: 15 May 2018, 14:51:45 UTC

Thanks a lot gyllic for these nice instructions! I just wanted to ask about the CVMFS configuration:

CVMFS_HTTP_PROXY="http://206.167.181.94:3128


I guess this is your private squid? You could mention that if people don't have access to a squid, or don't want to set up their own, they can set

CVMFS_HTTP_PROXY=DIRECT


I've added a link to this guide from this Q&A thread.

Also in that thread (this post), it is described how to disable vbox tasks in the BOINC configuration.
ID: 35269
gyllic

Joined: 9 Dec 14
Posts: 202
Credit: 2,533,875
RAC: 0
Message 35271 - Posted: 15 May 2018, 19:34:07 UTC - in response to Message 35269.  
Last modified: 15 May 2018, 19:45:00 UTC

David Cameron wrote:
I just wanted to ask about the CVMFS configuration:

CVMFS_HTTP_PROXY="http://206.167.181.94:3128

I guess this is your private squid?

I just took the default.local file located at http://atlasathome.cern.ch/boinc_conf/default.local, which is downloaded by the setup script provided in your "announcement" post (so this is not my personal squid). Is it not valid anymore?

David Cameron wrote:
You could mention that if people don't have access to a squid, or don't want to set up their own, they can set

CVMFS_HTTP_PROXY=DIRECT

That is a good idea! Unfortunately I can't edit the original post anymore (or at least I don't know how :-) ).

David Cameron wrote:
I've added a link to this guide from this Q&A thread.

Also in that thread (this post), it is described how to disable vbox tasks in the BOINC configuration.

Thank you. I would add this information to the original post, but I can't edit it anymore.
With pleasure :-) @computezrmle: thank you :-)
ID: 35271
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,901,893
RAC: 138,093
Message 35272 - Posted: 15 May 2018, 20:23:45 UTC

A few comments regarding CVMFS.

gyllic wrote:
CVMFS_QUOTA_LIMIT=40960

A bit too much to serve only ATLAS.
But, if anybody doesn't know what to do with 40 GB of RAM ...
;-)

From my own experience with ATLAS (native), a setting of
CVMFS_QUOTA_LIMIT=2300
will result in a typical hit rate of 95%.


Once the local job cache runs dry, new tasks often fail a few seconds after they start.
This can be solved by running "sudo cvmfs_config wipecache" just before a new task starts.
ID: 35272
gyllic

Joined: 9 Dec 14
Posts: 202
Credit: 2,533,875
RAC: 0
Message 35275 - Posted: 16 May 2018, 6:40:29 UTC - in response to Message 35272.  
Last modified: 16 May 2018, 6:41:26 UTC

computezrmle wrote:
CVMFS_QUOTA_LIMIT=40960
A bit too much to serve only ATLAS.
But, if anybody doesn't know what to do with 40 GB of RAM ...
;-)

The CVMFS_QUOTA_LIMIT defines the size of the local cache directory (in this case the /scratch/cvmfs directory). It is a soft quota, so there should be additional free space on the hard drive. Maybe 40 GB is a bit too much, but this PC has a 1 TB hard drive and is only used for ATLAS, so there is plenty of free and otherwise unused storage :-).

computezrmle wrote:
Once the local job cache runs dry, new tasks often fail a few seconds after they start.
This can be solved by running "sudo cvmfs_config wipecache" just before a new task starts.

From the documentation: "If you're about to change a current cache location or if you decrease the quota, run sudo cvmfs_config wipecache first in order to wipe out the current location."
ID: 35275
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,901,893
RAC: 138,093
Message 35280 - Posted: 16 May 2018, 8:41:11 UTC - in response to Message 35275.  

gyllic wrote:
... local cache directory (in this case the /scratch/cvmfs directory). ... on the hard drive ...

Ahm, of course, you are right.
The cache is usually on a disk.
It's because I set mine up in RAM via tmpfs.
:-O
ID: 35280
mmonnin

Joined: 22 Mar 17
Posts: 55
Credit: 10,223,976
RAC: 2,477
Message 35405 - Posted: 31 May 2018, 22:57:13 UTC
Last modified: 31 May 2018, 22:57:44 UTC

I just went through the setup.

I needed this to get through the first git download (Ubuntu 18.04 minimal install):

sudo apt install git


Then I also needed this:

sudo apt-get install libtool m4 automake


before this:
./autogen.sh


Now to try and get some work.
ID: 35405
mmonnin

Joined: 22 Mar 17
Posts: 55
Credit: 10,223,976
RAC: 2,477
Message 35408 - Posted: 1 Jun 2018, 13:50:50 UTC

Followup: Several tasks completed following the setup.

Skipped:
su testing

Added
sudo apt install git
sudo apt-get install libtool m4 automake

Modified per later post:
CVMFS_HTTP_PROXY=DIRECT

Very nice, thanks for the instructions. This is all well above my head in Linux.
ID: 35408
mmonnin

Joined: 22 Mar 17
Posts: 55
Credit: 10,223,976
RAC: 2,477
Message 35417 - Posted: 2 Jun 2018, 16:40:30 UTC

How are others seeing performance between the native and VBox ATLAS apps? For me, native is 19x slower than VBox on the same PC.

https://lhcathome.cern.ch/lhcathome/results.php?userid=485872&offset=0&show_names=0&state=0&appid=14

          Run time (s)   CPU time (s)   Credit    Credit/min (CPU)
Native:   2,892.54       25,856.68      324.96    0.75
VBox:     1,027.45       509.98         121.48    14.29
ID: 35417
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,901,893
RAC: 138,093
Message 35419 - Posted: 3 Jun 2018, 9:38:35 UTC - in response to Message 35417.  

The native app benefits from the following facts:
- there is no setup/shutdown phase to prepare/stop a VM
- it gets most of its runtime data from the local CVMFS cache (except for the very first task)

Thus a native task will always be more resource efficient than a vbox task.

Unfortunately this can't be seen when you compare runtimes, CPU times or credit points:

Runtimes
Don't account for which app phases are included or excluded.

CPU times
May include time used by peripheral apps like vboxwrapper.

Credit points
Calculated by a very complex algorithm.
Delivers strange values even for a single app running on one host.
Nearly impossible to get reliable data for comparing different apps.
ID: 35419
mmonnin

Joined: 22 Mar 17
Posts: 55
Credit: 10,223,976
RAC: 2,477
Message 35420 - Posted: 3 Jun 2018, 11:56:30 UTC
Last modified: 3 Jun 2018, 12:03:43 UTC

From what I recall there was a long setup time. I compared CPU time, which doesn't even include the setup time, where runtime goes up but CPU time does not.

The only benefit I saw was lower memory use. I can't run all 32 threads with either vbox or native ATLAS with 32 GB of memory, so there's basically no benefit at all, only credit penalties with a complicated, incomplete setup.

It is the same app, either set up yourself or bundled in a .vdi.
I've never seen an app vary by 19x due to the credit system. Runtime and credit of the native and vbox versions were pretty consistent when comparing within each version.
ID: 35420
gyllic

Joined: 9 Dec 14
Posts: 202
Credit: 2,533,875
RAC: 0
Message 35421 - Posted: 3 Jun 2018, 12:47:23 UTC - in response to Message 35420.  
Last modified: 3 Jun 2018, 12:52:36 UTC

If you look at your PC which crunches ATLAS tasks (https://lhcathome.cern.ch/lhcathome/results.php?hostid=10546150), almost every vbox ATLAS task that I have seen is NOT producing valid results. BOINC says they are valid and gives you credit, but no HITS file (the scientifically needed result file) is produced by the vbox tasks, and the runtimes are way too short. So all your vbox tasks run and give you credit, but no scientific result is produced, i.e. that is a total waste of CPU time. Only the native tasks produced valid results with HITS files.
So it does not make any sense to compare your vbox credits with the native credits. You should check your vbox setup or switch to native only. Generally speaking, native has the benefits which computezrmle described. Additionally, it can reuse the CVMFS cache for every new native task, whereas the CVMFS cache inside a vbox task is only available to that one specific task and gets deleted when the vbox task is removed.
ID: 35421
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 1268
Credit: 8,421,616
RAC: 2,139
Message 35422 - Posted: 3 Jun 2018, 14:33:51 UTC - in response to Message 35421.  
Last modified: 3 Jun 2018, 14:41:26 UTC

gyllic wrote:
If you look at your PC which crunches ATLAS tasks (https://lhcathome.cern.ch/lhcathome/results.php?hostid=10546150), ...

From that host, 27 tasks were returned in the last 24 hours: 23 failed and 4 finished their jobs OK. The last OK one came back at 2018-06-02 18:25:09.


https://bigpanda.cern.ch/jobs/?computingsite=BOINC_MCORE&modificationhost=mmonnin@TR1950x&hours=24&mode=nodrop&display_limit=100

Btw: the CPU time of the VBox tasks is the accumulated time of the 3 VBoxHeadless processes belonging to that VM. Mostly the 3rd child is the only active one.
ID: 35422
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 36260 - Posted: 4 Aug 2018, 8:21:26 UTC
Last modified: 4 Aug 2018, 8:23:15 UTC

About a month ago I used gyllic's procedure to set up 2 Ubuntu hosts to run ATLAS native. Today I decided to try it with a 3rd Ubuntu host. The procedure at the top of this thread for compiling and installing CVMFS still works, but the procedure for Singularity fails at the "sudo ./autogen.sh" step. Yes, I did see mmonnin's advice regarding autogen.sh, but the problem is that the git download doesn't contain an autogen.sh script at all. In fact, the set of files git cloned into the singularity_source directory on the 2 hosts I set up a month ago is quite different from what it cloned today. I have no idea how to proceed; can someone point the way, please?
ID: 36260
David Cameron
Project administrator
Project developer
Project scientist

Joined: 13 May 14
Posts: 387
Credit: 15,314,184
RAC: 0
Message 36277 - Posted: 6 Aug 2018, 14:40:28 UTC - in response to Message 36260.  
Last modified: 6 Aug 2018, 14:42:28 UTC

It seems that Singularity is currently being completely rewritten in Go (instead of C). So rather than using master, it's better to use a stable tagged version. After the git clone, switch to tag 2.6.0:

git clone https://github.com/singularityware/singularity.git singularity_source/
cd singularity_source/
git checkout tags/2.6.0
./autogen.sh
...

Then you should have autogen.sh available. If you confirm this works, we can ask gyllic to update his instructions.
ID: 36277