Message boards : ATLAS application : ATLAS native - Configure CVMFS to work with openhtc.io

computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Message 35857 - Posted: 11 Jul 2018, 7:43:48 UTC
Last modified: 11 Jul 2018, 8:21:16 UTC

Dave Dykstra from Fermilab recently presented Cloudflare's openhtc.io at the CHEP 2018 conference.
His presentation can be found here:
https://indico.cern.ch/event/587955/contributions/2936824/

Volunteers running ATLAS native can already use openhtc.io by changing their local CVMFS configuration as shown below.


1. cd to the local folder /etc/cvmfs/domain.d. It usually contains a few "*.conf" files.
2. Create a corresponding "*.local" file for each domain you want to switch to openhtc.io
3. Insert the following line into each "*.local" file:
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@'

4. Run "cvmfs_config showconfig -s" to check if the new configuration will be accepted
5. Run "cvmfs_config reload" to activate the new configuration
6. "cvmfs_config stat" should look like this example:
Running /usr/bin/cvmfs_config stat atlas.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 10512 21230 66668 37686 1 58 2705096 3538945 2100 65024 0 11203 99.8126 24924 770 http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch DIRECT 1

Volunteers using their own local Squid proxy would see "http://<IP_of_your_local_proxy>:3128" instead of "DIRECT".
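For convenience, steps 1 to 5 can be scripted. The following is a minimal sketch, assuming you only want to switch the cern.ch domain (extend the list after "for domain in" to match the "*.conf" files you actually have) and that it is run as root:

# Create a *.local override for each domain that should use openhtc.io
cd /etc/cvmfs/domain.d
for domain in cern.ch; do
    cat > "${domain}.local" <<'EOF'
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@'
EOF
done

# Check that the new configuration is accepted, then activate it
cvmfs_config showconfig -s
cvmfs_config reload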


<edit>
I have been informed that my wording "Cloudflare's openhtc.io" may not be correct.
Anybody interested in the correct wording may read Dave Dykstra's presentation.
</edit>
Magic Quantum Mechanic
Message 35858 - Posted: 11 Jul 2018, 9:42:06 UTC

Yes, our own Laurence Field is one of the co-authors.

Years ago I saw that field of grass and buffalo/bison above Fermilab.
Mars sure is bright tonight (depending on where you are).
gyllic

Message 35881 - Posted: 12 Jul 2018, 21:49:05 UTC - in response to Message 35857.  

Nice, thanks for the info and tips, computezrmle!

Looks like it is working here.
David Cameron
Project administrator
Project developer
Project scientist

Message 35882 - Posted: 13 Jul 2018, 6:27:05 UTC - in response to Message 35881.  

Thanks a lot, it also works well for me (I'm also at the CHEP 2018 conference this week).
bronco

Message 35895 - Posted: 14 Jul 2018, 18:58:21 UTC - in response to Message 35857.  

Volunteers running ATLAS native can already use openhtc.io by changing their local CVMFS configuration as shown below.


1. cd to the local folder /etc/cvmfs/domain.d. It usually contains a few "*.conf" files.
2. Create a corresponding "*.local" file for each domain you want to switch to openhtc.io


In that folder I see 3 "*.conf" files:

    cern.ch.conf
    egi.eu.conf
    opensciencegrid.org.conf

I don't know the last two, though they sound interesting. I know only cern.ch, and I assume that to run ATLAS native tasks it would suffice to create a "*.local" file for cern.ch.conf only. Correct?

Jim1348

Message 35896 - Posted: 14 Jul 2018, 19:15:44 UTC - in response to Message 35895.  

In that folder I see 3 "*.conf" files:

    cern.ch.conf
    egi.eu.conf
    opensciencegrid.org.conf

I don't know the last two, though they sound interesting. I know only cern.ch, and I assume that to run ATLAS native tasks it would suffice to create a "*.local" file for cern.ch.conf only. Correct?


Yes, that worked for me. I just created a "cern.ch.local" in that folder as directed, and it all worked.
bronco

Message 35898 - Posted: 14 Jul 2018, 20:27:10 UTC - in response to Message 35896.  
Last modified: 14 Jul 2018, 20:37:29 UTC

@Jim1348

Marvelous. It seems to be working here too, or at least it passed all the tests described by computezrmle in the OP. I will assume it's caching since I don't know how to test and verify. Maybe what I don't know won't hurt me :)

@ all you other Linux wizards (gyllic, computezrmle, etc.)...

Thanks for all your great advice on getting native ATLAS tasks working. It's nice to be able to wiggle the mouse and log in without the swap daemon going nuts and ending up with a fubar task.
Jim1348

Message 35899 - Posted: 14 Jul 2018, 21:10:21 UTC - in response to Message 35898.  

I will assume it's caching since I don't know how to test and verify. Maybe what I don't know won't hurt me :)

One practical test: in your task details (the stderr_txt file) you will see a "CPUUsage=" number, and it should go up after the cache is active. The exact value depends on the number of cores you run per work unit; I run two cores and now get CPUUsage=195%, whereas before using the cache I was getting around CPUUsage=189% or less. That is the payoff: increased CPU efficiency.
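If you prefer to check this from the command line, here is a rough sketch. The slot path assumes a standard Linux package installation of the BOINC client; adjust it if your data directory lives elsewhere:

# Show the CPUUsage figures reported in the stderr of the current BOINC slots
# (/var/lib/boinc-client is an assumption; it may be e.g. ~/BOINC on your host)
grep -ho 'CPUUsage=[0-9.]*%*' /var/lib/boinc-client/slots/*/stderr.txt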
bronco

Message 35901 - Posted: 14 Jul 2018, 23:12:35 UTC - in response to Message 35899.  

Thanks. I completed only 2 tasks without the caching setup, so I don't have much of a baseline, but it looks like caching is giving about a 6 percentage point increase in CPU usage. I consider it verified... cache is working :)
maeax

Message 35905 - Posted: 15 Jul 2018, 7:36:26 UTC
Last modified: 15 Jul 2018, 7:37:21 UTC

It also works for me, at the moment on one SL69 (Scientific Linux 6.9) host.
Running /usr/bin/cvmfs_config stat cvmfs-config.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 8715 7 29220 8 3 1 3995592 4194304 0 65024 0 0 n/a 0 0 http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch DIRECT 1

Thanks for working this out.
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Message 35906 - Posted: 15 Jul 2018, 8:10:24 UTC - in response to Message 35898.  

bronco wrote:
... I will assume it's caching since i don't know how to test and verify.

Yes, it's caching.
To test it, simply run "cvmfs_config stat" and look at the HITRATE.

Your individual cache size can be configured via CVMFS_QUOTA_LIMIT in "/etc/cvmfs/default.local".

Typical hitrates in relation to the cache size:

cache size: 2700 MB
hitrate: 95 %

cache size: 3500 MB
hitrate: 99 %
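As a concrete example, a minimal "/etc/cvmfs/default.local" could look like the sketch below. The 4096 MB figure is only an illustration; pick whatever fits your disk:

# /etc/cvmfs/default.local
# Local cache limit in MB, shared by all configured repositories
CVMFS_QUOTA_LIMIT=4096

After editing the file, run "cvmfs_config reload" so the new limit takes effect.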
Jim1348

Message 35912 - Posted: 15 Jul 2018, 10:44:13 UTC - in response to Message 35906.  

To test it, simply run "cvmfs_config stat" and look at the HITRATE.

I haven't changed the default settings, and get this:

Running /usr/bin/cvmfs_config stat atlas.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 2749 2416 43680 37830 2 58 3273787 4096000 559 65024 0 296062 99.8868 378017 928 http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch DIRECT 1

Running /usr/bin/cvmfs_config stat sft.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 16268 2089 23528 10053 3 3 3273787 4096000 7 65024 0 5623 99.7866 406 1 http://s1fnal-cvmfs.openhtc.io/cvmfs/sft.cern.ch DIRECT 1


So it appears that my HITRATE is over 99% if I am reading this correctly.
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Message 35914 - Posted: 15 Jul 2018, 11:23:08 UTC - in response to Message 35912.  

... So it appears that my HITRATE is over 99% if I am reading this correctly.

Yes.
Your setup uses the default value for CVMFS_QUOTA_LIMIT (4 GB), which is set in "/etc/cvmfs/default.conf".
The value is shown as CACHEMAX(K), while CACHEUSE(K) shows how much is currently in use.

Be aware that (by default) all configured repositories share the same local cache and its total size.
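A quick way to see the combined on-disk footprint of that shared cache (the location comes from CVMFS_CACHE_BASE, which is /var/lib/cvmfs on a default installation):

# Total disk space used by the shared CVMFS cache across all repositories
sudo du -sh /var/lib/cvmfs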
gyllic

Message 35915 - Posted: 15 Jul 2018, 11:24:32 UTC - in response to Message 35912.  
Last modified: 15 Jul 2018, 11:25:49 UTC

So it appears that my HITRATE is over 99% if I am reading this correctly.
Yes, you are reading this correctly. This hitrate shows that caching data for ATLAS is a good idea and that a local cache improves efficiency. It is also one of the big advantages of the native app over the vbox app: the vbox app's local cache is deleted as soon as the VM shuts down and is removed.

EDIT: looks like computezrmle was faster so ignore this post :-)
Jim1348

Message 35916 - Posted: 15 Jul 2018, 12:32:31 UTC - in response to Message 35915.  

EDIT: looks like computezrmle was faster so ignore this post :-)

I think it is a great post. I just wish that LHCb could do as well.
Jim1348

Message 35917 - Posted: 15 Jul 2018, 13:02:25 UTC - in response to Message 35914.  

Be aware that (by default) all configured repositories share the same local cache and its total size.

OK. I have been running just one native ATLAS at a time (2 cores per work unit), which probably helps the efficiency. I will increase that to three WUs at a time and see what happens. I can increase the size of the cache if necessary. I have 32 GB just sitting there mostly unused, and this would be the best use I could put it to.

Thanks again, as always.
bronco

Message 35925 - Posted: 15 Jul 2018, 22:58:36 UTC - in response to Message 35912.  

So it appears that my HITRATE is over 99% if I am reading this correctly.


I also stuck with the 4 GB default and get over 99% HITRATE.

Thanks for all the great info, guys.
Jim1348

Message 36863 - Posted: 24 Sep 2018, 19:02:09 UTC

I am attempting to run native ATLAS again with openhtc.io, but when I run a check I get:

$ sudo cvmfs_config chksetup
Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io
Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas-condb.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io
Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io

Does this mean that native ATLAS won't work at all? Do I need to change something?
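For what it's worth, a plain HTTP probe can show whether an endpoint is reachable at all, independent of CVMFS. This sketch assumes curl is installed:

# Request the repository manifest directly from the stratum one behind openhtc.io;
# an HTTP 200 here means basic connectivity is fine and the problem lies elsewhere
curl -s -o /dev/null -w '%{http_code}\n' http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch/.cvmfspublished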
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Message 36864 - Posted: 24 Sep 2018, 19:37:53 UTC - in response to Message 36863.  

Please run "cvmfs_config showconfig -s atlas.cern.ch" and post the output.
Jim1348

Message 36865 - Posted: 24 Sep 2018, 19:42:51 UTC - in response to Message 36864.  

$ cvmfs_config showconfig -s atlas.cern.ch
CVMFS_REPOSITORY_NAME=atlas.cern.ch
CVMFS_BACKOFF_INIT=2 # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10 # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1 # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes # from /etc/cvmfs/default.conf
CVMFS_DEFAULT_DOMAIN=cern.ch # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_HOST_RESET_AFTER=1800 # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT # from /etc/cvmfs/default.local
CVMFS_IGNORE_SIGNATURE=no # from /etc/cvmfs/default.conf
CVMFS_KEYS_DIR=/etc/cvmfs/keys/cern.ch # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024 # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=1 # from /etc/cvmfs/default.conf
CVMFS_MOUNT_DIR=/cvmfs # from /etc/cvmfs/default.conf
CVMFS_MOUNT_RW=no # from /etc/cvmfs/default.conf
CVMFS_NFILES=65536 # from /etc/cvmfs/default.conf
CVMFS_NFS_SOURCE=no # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS=http://wpad/wpad.dat # from /etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300 # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=8192 # from /etc/cvmfs/default.local
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=no # from /etc/cvmfs/default.conf
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1fnal-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1ral-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch' # from /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SHARED_CACHE=yes # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5 # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10 # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs # from /etc/cvmfs/default.conf