ATLAS native - Configure CVMFS to work with openhtc.io
Joined: 15 Jun 08 Posts: 2536 Credit: 254,196,314 RAC: 57,637

Dave Dykstra from Fermilab recently presented Cloudflare's openhtc.io at the CHEP 2018 conference. His presentation can be found here: https://indico.cern.ch/event/587955/contributions/2936824/

Volunteers running ATLAS native can already use openhtc.io if they change their local CVMFS configuration as shown below.

1. cd to the local folder /etc/cvmfs/domain.d. It usually contains a couple of "*.conf" files.
2. Create a corresponding "*.local" file for each domain you want to switch to openhtc.io.
3. Insert the following line into each "*.local" file:
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@'
4. Run "cvmfs_config showconfig -s" to check whether the new configuration will be accepted.
5. Run "cvmfs_config reload" to activate the new configuration.
6. "cvmfs_config stat" should look like this example:

Running /usr/bin/cvmfs_config stat atlas.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 10512 21230 66668 37686 1 58 2705096 3538945 2100 65024 0 11203 99.8126 24924 770 http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch DIRECT 1

Volunteers using their own local squid will see "http://<IP_of_your_local_proxy>:3128" instead of "DIRECT".

<edit> I have been informed that my wording "Cloudflare's openhtc.io" may not be correct. Anybody interested in the correct wording may read Dave Dykstra's presentation. </edit>
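Steps 1-3 above can be sketched as a short shell session. This sketch writes the "*.local" file into a scratch directory so it runs without root; on a real host the target folder is /etc/cvmfs/domain.d and the cvmfs_config commands (shown as comments) finish the job:

```shell
# Steps 1-3 from the post, sketched against a scratch directory
# (a real host would use /etc/cvmfs/domain.d and need root).
confdir=$(mktemp -d)

# Step 3: one Stratum-1 mirror URL per entry; CVMFS expands @fqrn@
# to the fully qualified repository name, e.g. atlas.cern.ch.
cat > "$confdir/cern.ch.local" <<'EOF'
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@'
EOF

# Steps 4-5 on a real host (require cvmfs to be installed):
#   sudo cvmfs_config showconfig -s
#   sudo cvmfs_config reload

# Sanity check: the file should list all four openhtc.io mirrors.
mirrors=$(grep -o 'openhtc\.io' "$confdir/cern.ch.local" | wc -l)
echo "$mirrors"
```

The same file content works for any domain; only the file name (e.g. egi.eu.local) changes.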
Joined: 24 Oct 04 Posts: 1173 Credit: 54,872,750 RAC: 15,674

Yes, our own Laurence Field is one of the co-authors.

Years ago I saw that field of grass and buffalo / bison above FermiLab.
Mars sure is bright tonight (depending on where you are)
Joined: 9 Dec 14 Posts: 202 Credit: 2,533,875 RAC: 0

Nice, thanks for the info and tips, computezrmle! Looks like it is working here.
Joined: 13 May 14 Posts: 387 Credit: 15,314,184 RAC: 0

Thanks a lot, it also works well for me (I'm also at the CHEP 2018 conference this week).
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0

Volunteers running ATLAS native can already use openhtc.io when they change their local CVMFS configuration as shown below.
In that folder I see 3 *.conf:
cern.ch.conf
egi.eu.conf
opensciencegrid.org.conf
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

In that folder I see 3 *.conf:
Yes, that worked for me. I just created a "cern.ch.local" in that folder as directed, and it all worked.
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0

@Jim1348
Marvelous. It seems to be working here too, or at least it passed all the tests described by computezrmle in the OP. I will assume it's caching since I don't know how to test and verify. Maybe what I don't know won't hurt me :)

@all you other Linux wizards (gyllic, computezrmle, etc.)...
Thanks for all your great advice on getting native ATLAS tasks working. It's nice to be able to wiggle the mouse and log in without having the swap daemon go nuts and end up with a fubar task.
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

I will assume it's caching since i don't know how to test and verify. Maybe what I don't know won't hurt me :)
One practical test is that in your task details (stderr_txt file) you will see a "CPUUsage=" number. It should go up after the cache is activated. It depends on the number of cores you run per work unit, but I run two cores and now get CPUUsage=195%. Before using the cache, I was getting around CPUUsage=189% or less. That is the payoff: increased CPU efficiency.
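One way to pull that number out of a finished task's stderr is a simple grep. The excerpt below is illustrative only (the real stderr_txt sits in the BOINC project directory and the lines around "CPUUsage=" vary):

```shell
# Illustrative stderr_txt excerpt; only the CPUUsage line is taken
# from a real task report, the surrounding lines are placeholders.
sample='... event processing done ...
CPUUsage=195%
... copying results ...'

# Extract the CPUUsage figure so runs before/after caching can be compared.
usage=$(printf '%s\n' "$sample" | grep -o 'CPUUsage=[0-9]*%')
echo "$usage"
```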
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0

Thanks. I completed only 2 tasks without the caching setup so I don't have much of a baseline, but it looks like caching is giving about a 6% increase in CPU usage. I consider it verified... cache is working :)
Joined: 2 May 07 Posts: 2243 Credit: 173,902,375 RAC: 1,652

It also works for me, at the moment on one SL69 machine.

Running /usr/bin/cvmfs_config stat cvmfs-config.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 8715 7 29220 8 3 1 3995592 4194304 0 65024 0 0 n/a 0 0 http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch DIRECT 1

Thanks for working this out.
Joined: 15 Jun 08 Posts: 2536 Credit: 254,196,314 RAC: 57,637

bronco wrote:
... I will assume it's caching since i don't know how to test and verify.

Yes, it's caching.
To test it, simply run "cvmfs_config stat" and look at the HITRATE.
Your individual cache size can be configured via CVMFS_QUOTA_LIMIT in "/etc/cvmfs/default.local".

Typical hitrates in relation to the cache size:
cache size: 2700 MB -> hitrate: 95 %
cache size: 3500 MB -> hitrate: 99 %
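As a sketch, a "/etc/cvmfs/default.local" aiming at the 99 % hitrate range could contain the fragment below (the value is in MB; after editing, activate it with "sudo cvmfs_config reload"):

```
# /etc/cvmfs/default.local
# CVMFS_QUOTA_LIMIT is given in MB; 3500 MB corresponds to the
# ~99 % hitrate figure quoted above.
CVMFS_QUOTA_LIMIT=3500
```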
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

To test it, simply run "cvmfs_config stat" and look at the HITRATE.
I haven't changed the default settings, and get this:

Running /usr/bin/cvmfs_config stat atlas.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 2749 2416 43680 37830 2 58 3273787 4096000 559 65024 0 296062 99.8868 378017 928 http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch DIRECT 1

Running /usr/bin/cvmfs_config stat sft.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.5.0.0 16268 2089 23528 10053 3 3 3273787 4096000 7 65024 0 5623 99.7866 406 1 http://s1fnal-cvmfs.openhtc.io/cvmfs/sft.cern.ch DIRECT 1

So it appears that my HITRATE is over 99%, if I am reading this correctly.
Joined: 15 Jun 08 Posts: 2536 Credit: 254,196,314 RAC: 57,637

... So it appears that my HITRATE is over 99% if I am reading this correctly.
Yes. Your setup uses the default value for CVMFS_QUOTA_LIMIT (4 GB), which is set in "/etc/cvmfs/default.conf". The value is shown as CACHEMAX(K), while CACHEUSE(K) shows how much is currently in use.
Be aware that (by default) all configured repositories share the same local cache and its total size.
Joined: 9 Dec 14 Posts: 202 Credit: 2,533,875 RAC: 0

So it appears that my HITRATE is over 99% if I am reading this correctly.
Yes, you are reading this correctly. This hitrate shows that caching some data for ATLAS is a good idea and that using a local cache will improve efficiency. This is also one of the big benefits of the native app compared to the vbox app, because the local cache of the vbox app gets deleted as soon as the VM has shut down and been removed.

EDIT: looks like computezrmle was faster, so ignore this post :-)
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

EDIT: looks like computezrmle was faster so ignore this post :-)
I think it is a great post. I just wish that LHCb could do as well.
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

Be aware that (by default) all configured repositories share the same local cache and its total size.
OK. I have been running just one native ATLAS at a time (2 cores per work unit), which probably helps the efficiency. I will increase that to three WUs at a time and see what happens. I can increase the size of the cache if necessary. I have 32 GB just sitting there mostly unused, and this would be the best use I could put it to.
Thanks again, as always.
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0

So it appears that my HITRATE is over 99% if I am reading this correctly.
I also stuck with the 4 GB default and also get over 99% HITRATE. Thanks for all the great info, guys.
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

I am attempting to run native ATLAS again with openhtc, but when I run a check I get:

$ sudo cvmfs_config chksetup
Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io
Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas-condb.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io
Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io

Does this mean that native ATLAS won't work at all? Do I need to change something?
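Since the warnings repeat per repository, it can help to condense them to the distinct hosts that failed before digging further. A small filter does that; here it is fed with a pasted sample of the output above (pipe the real chksetup output instead):

```shell
# Condense repeated chksetup warnings to the distinct openhtc.io
# hosts that failed (sample pasted inline for illustration).
warnings='Warning: failed to use Geo-API with s1cern-cvmfs.openhtc.io
Warning: failed to access http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to use Geo-API with s1bnl-cvmfs.openhtc.io'

hosts=$(printf '%s\n' "$warnings" | grep -o '[a-z0-9-]*\.openhtc\.io' | sort -u)
echo "$hosts"
```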
Joined: 15 Jun 08 Posts: 2536 Credit: 254,196,314 RAC: 57,637

Please run "cvmfs_config showconfig -s atlas.cern.ch" and post the output.
Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0

$ cvmfs_config showconfig -s atlas.cern.ch
CVMFS_REPOSITORY_NAME=atlas.cern.ch
CVMFS_BACKOFF_INIT=2 # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10 # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1 # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes # from /etc/cvmfs/default.conf
CVMFS_DEFAULT_DOMAIN=cern.ch # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_HOST_RESET_AFTER=1800 # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT # from /etc/cvmfs/default.local
CVMFS_IGNORE_SIGNATURE=no # from /etc/cvmfs/default.conf
CVMFS_KEYS_DIR=/etc/cvmfs/keys/cern.ch # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024 # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=1 # from /etc/cvmfs/default.conf
CVMFS_MOUNT_DIR=/cvmfs # from /etc/cvmfs/default.conf
CVMFS_MOUNT_RW=no # from /etc/cvmfs/default.conf
CVMFS_NFILES=65536 # from /etc/cvmfs/default.conf
CVMFS_NFS_SOURCE=no # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS=http://wpad/wpad.dat # from /etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300 # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=8192 # from /etc/cvmfs/default.local
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=no # from /etc/cvmfs/default.conf
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1fnal-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1ral-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch' # from /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SHARED_CACHE=yes # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5 # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10 # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs # from /etc/cvmfs/default.conf
©2024 CERN