Message boards : Number crunching : Recommended CVMFS Configuration for Native Apps - Comments and Questions
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 44233 - Posted: 30 Jan 2021, 16:33:24 UTC

This is a discussion thread to post comments and questions regarding the CVMFS Configuration used by LHC@home:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594

A previous version of the HowTo and older comments can be found here:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5342
ID: 44233
[AF] Hydrosaure
Joined: 8 May 17
Posts: 13
Credit: 40,619,782
RAC: 2,329
Message 44240 - Posted: 31 Jan 2021, 9:13:53 UTC - in response to Message 44233.  

Hello,

My systems are all configured to use at least 4GB of local CVMFS cache and according to stats, they seem to reach pretty high hit ratios.
Here is just a look at a couple of them:


Would a proxy still help here?

Also, are there specific Squid options one should set with regard to caching CVMFS data (i.e. max object size, retention or refresh policy)?
ID: 44240
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 44242 - Posted: 31 Jan 2021, 10:09:33 UTC - in response to Message 44240.  

My systems ... seem to reach pretty high hit ratios.
Would a proxy still help here ?

Yes.
Here are 2 major reasons.

Each CVMFS client serves only the tasks running on the same box (or inside the same VM).
A single Squid serves all boxes and all VMs running at your site.


Tasks like ATLAS or CMS make heavy use of CERN's Frontier system.
Frontier requests data via HTTP but, unlike CVMFS, has no local cache of its own.
A local Squid closes this gap and serves most Frontier requests from its cache.



Also are there specific Squid options one should set in regards to caching CVMFS data (ie. max object size, retention or refresh policy) ?

It's all covered by the squid.conf in the Squid HowTo.

Some of Squid's default settings were made decades ago and focus on surfing the web over slow connections.
The suggestions in this forum extend those defaults and are based on experience and on analysing the data flow created by LHC@home.
Surfing arbitrary internet pages with these settings is still possible, but since most pages use HTTPS their hit rates would drop to 0%.
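For orientation only, the kind of directives involved might look like this (all values are illustrative assumptions, not the project's recommendation; the squid.conf from the Squid HowTo is the authoritative source):

```conf
# illustrative squid.conf fragment - example values only
http_port 3128
cache_mem 256 MB
maximum_object_size 1024 MB                  # CVMFS chunks can be large
cache_dir ufs /var/spool/squid 20000 16 256  # ~20 GB disk cache
```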
Questions regarding specific Squid options should be asked in the Squid thread.
ID: 44242
maeax

Joined: 2 May 07
Posts: 2244
Credit: 173,902,375
RAC: 307
Message 44247 - Posted: 1 Feb 2021, 1:26:38 UTC
Last modified: 1 Feb 2021, 1:34:04 UTC

ID: 44247
Jim1348

Joined: 15 Nov 14
Posts: 602
Credit: 24,371,321
RAC: 0
Message 44268 - Posted: 4 Feb 2021, 22:09:55 UTC - in response to Message 44233.  

Very nice; thanks.
But I think it should be pointed out that the automatic configuration download no longer applies, insofar as I can see.
(sudo wget https://lhcathome.cern.ch/lhcathome/download/default.local -O /etc/cvmfs/default.local)

Maybe it could be updated?
ID: 44268
wolfman1360

Joined: 17 Feb 17
Posts: 42
Credit: 2,589,736
RAC: 0
Message 44404 - Posted: 27 Feb 2021, 7:08:14 UTC - in response to Message 44268.  

Very nice; thanks.
But I think it should be pointed out that the automatic configuration download no longer applies, insofar as I can see.
(sudo wget https://lhcathome.cern.ch/lhcathome/download/default.local -O /etc/cvmfs/default.local)

Maybe it could be updated?

I had this problem, as well. Probing immediately failed.
Perhaps that file could be updated with the minimum needed configuration, although I'm still unclear how one can actually optimize their configuration if it is just 1 or 2 machines on the same connection.
ID: 44404
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 44405 - Posted: 27 Feb 2021, 8:51:57 UTC - in response to Message 44404.  

Perhaps that file could be updated with the minimum needed configuration ...

The file on the server is already up to date.
Be aware that it includes 2 optional settings (with proxy/without proxy) and one of them has to be activated by the user.

In general:
Native apps require more settings to be configured by the user.
This is easier, faster and more reliable than guessing those values automatically.
In addition, some steps must be done as root.



although I'm still unclear how one can actually optimize their configuration

The simple answer

Cache as much as possible, as close as possible to the point where it is used.
To avoid wasted effort, focus on the major bottlenecks first.


More LHC@home specific

CVMFS is heavily used but has its own cache - one cache instance per machine.
A machine can't share its CVMFS cache with other machines.
Each VM counts as an individual machine.
Outdated or missing data is requested from the project servers.

Frontier is heavily used by ATLAS and CMS. It has no local cache of its own.
Each app sends all Frontier requests to the project servers.


Cloudflare's openhtc.io infrastructure helps to distribute CVMFS and Frontier data.
They run a very fast worldwide network and one of their proxy caches will most likely be located much closer to your clients than any project server.

VBox apps use openhtc.io by default but users running native apps have to set "CVMFS_USE_CDN=yes" in their CVMFS configuration.
This is disabled in the default configuration because lots of computers in various datacenters use special connections and require this to be set "OFF".


A local HTTP proxy closes the gap between openhtc.io and the local clients.
It can cache data for all local CVMFS and Frontier clients as well as offload openhtc.io and the project servers.
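As an illustration, a native-app /etc/cvmfs/default.local might then look like this (repository list and cache size are example assumptions; the project's default.local download is the authoritative template):

```conf
# /etc/cvmfs/default.local - illustrative example, not the project file
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_QUOTA_LIMIT=4096     # local cache size in MB (example value)
CVMFS_USE_CDN=yes          # route CVMFS via Cloudflare's openhtc.io
CVMFS_HTTP_PROXY=DIRECT    # or a local Squid, e.g. "http://192.168.x.y:3128;DIRECT"
```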
ID: 44405
wolfman1360

Joined: 17 Feb 17
Posts: 42
Credit: 2,589,736
RAC: 0
Message 44414 - Posted: 27 Feb 2021, 20:36:31 UTC - in response to Message 44405.  

Cache as much as possible, as close as possible to the point where it is used.
[...]
A local HTTP proxy closes the gap between openhtc.io and the local clients.
It can cache data for all local CVMFS and Frontier clients as well as offload openhtc.io and the project servers.

How does one go about caching as much as possible?
Not sure what happened in my case, then, since as soon as I downloaded
https://lhcathome.cern.ch/lhcathome/download/default.local -O /etc/cvmfs/default.local
I got immediate failures after probing.
Running the listed items in the HowTo fixed my issues, and I believe I also added the line containing openhtc.io.

Thank you for the help and excellent clarification.
ID: 44414
maeax

Joined: 2 May 07
Posts: 2244
Credit: 173,902,375
RAC: 307
Message 44417 - Posted: 28 Feb 2021, 9:36:13 UTC - in response to Message 44405.  

VBox apps use openhtc.io by default but users running native apps have to set "CVMFS_USE_CDN=yes" in their CVMFS configuration.
This is disabled in the default configuration because lots of computers in various datacenters use special connections and require this to be set "OFF".

Release Notes from the CVMFS documentation, 2.7.5:

2.1 Release Notes for CernVM-FS 2.7.5
CernVM-FS 2.7.5 is a patch release. It contains several bugfixes for the client.
As with previous releases, upgrading clients should be seamless just by installing the new package from the repository. As usual, we recommend to update only a few worker nodes first and gradually ramp up once the new version proves to work correctly. Please take special care when upgrading a cvmfs client in NFS mode.
Stratum 0 and stratum 1 servers do not necessarily need to update from version 2.7.4.

2.1.1 Bug Fixes and Improvements
- [client] fix rare crash when kernel meta-data caches operate close to 4GB (CVM-1918)
- [client] let mount helper detect when CVMFS_HTTP_PROXY is defined but empty
- [client] add CVMFS_CLIENT_PROFILE and CVMFS_USE_CDN to the list of known parameters in cvmfs_config

Atlas-Applet in Windows is using CVMFS 2.6.3.
ID: 44417
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 44418 - Posted: 28 Feb 2021, 14:08:31 UTC - in response to Message 44417.  

Atlas-Applet in Windows is using CVMFS 2.6.3.

CVMFS_USE_CDN makes it easier to switch between the traditional CVMFS server list and the Cloudflare server list.
Older setups had to configure this manually, which is still possible.
It's all fine as long as an application from this project uses Cloudflare servers.

Even CMS VMs that use v2.4.4.0 work fine.
Regarding CVMFS_USE_CDN, it's more important to use a recent cvmfs-config-default package than to upgrade the CVMFS client:
http://ecsft.cern.ch/dist/cvmfs/cvmfs-config/
ID: 44418
Aurum
Joined: 12 Jun 18
Posts: 126
Credit: 53,906,164
RAC: 0
Message 44809 - Posted: 24 Apr 2021, 20:05:50 UTC

What does this mean???
cvmfs_config stat
/usr/bin/cvmfs_config: line 907: cd: /cvmfs/atlas.cern.ch: Transport endpoint is not connected
ID: 44809
Ken_g6

Joined: 4 Jul 06
Posts: 7
Credit: 339,475
RAC: 0
Message 46862 - Posted: 10 Jun 2022, 4:19:21 UTC

I have an old native app setup. I did the steps in the Howto V2 today. It didn't set up the CDN until I also installed the latest CVMFS config.

Also, will any caching proxy do? I set up an old copy of Polipo.
ID: 46862
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 46863 - Posted: 10 Jun 2022, 5:16:48 UTC - in response to Message 46862.  

Found an ATLAS native log of a task that succeeded:
[2022-06-09 19:17:48] Checking for CVMFS
[2022-06-09 19:17:48] Probing /cvmfs/atlas.cern.ch... OK
[2022-06-09 19:17:48] Probing /cvmfs/atlas-condb.cern.ch... OK
[2022-06-09 19:17:48] Running cvmfs_config stat atlas.cern.ch
[2022-06-09 19:17:48] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2022-06-09 19:17:48] 2.7.1.0 8420 286 49476 105303 0 78 2841033 4194305 786 65024 0 238387 99.9362 703 40 http://cvmfs-s1fnal.opensciencegrid.org/cvmfs/atlas.cern.ch http://127.0.0.1:8123 1
[2022-06-09 19:17:48] CVMFS is ok
[2022-06-09 19:17:48] Efficiency of ATLAS tasks can be improved by the following measure(s):
[2022-06-09 19:17:48] The CVMFS client on this computer should be configured to use Cloudflare's openhtc.io.
[2022-06-09 19:17:48] Further information can be found at the LHC@home message board.



Nonetheless, there are some points that should be changed:

1.
"CVMFS_USE_CDN=yes" should be set in /etc/cvmfs/default.local.

Prior to "cvmfs_config reload" you can show the new configuration if you run:
cvmfs_config showconfig -s atlas.cern.ch | grep CVMFS_SERVER_URL

It should return a list of "*.openhtc.io" servers instead of the stratum-ones.



2.
127.0.0.1 should not be used as the proxy IP, as it allows only processes on the same box to access that address.
That's why CVMFS can contact a proxy on the same box, but another box in your network can't - not even a VM running on the proxy box.

Instead, configure your proxy to listen on the LAN IP of the box (e.g. 192.168.x.y) and configure your clients to connect to the proxy via that IP.
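Sketched with a hypothetical LAN IP of 192.168.1.10 and Squid on port 3128 (both values are assumptions), the two sides look like this:

```conf
# squid.conf on the proxy box: listen on the LAN IP, not on 127.0.0.1
http_port 192.168.1.10:3128

# /etc/cvmfs/default.local on every client (including the proxy box itself)
CVMFS_HTTP_PROXY="http://192.168.1.10:3128;DIRECT"
```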


Also, will any caching proxy do? I set up an old copy of Polipo.

In principle, yes.
CVMFS talks HTTP, hence any HTTP proxy should be able to handle the requests.
Your log mentioned above shows that a proxy is used for CVMFS.

But:
The Squid configuration given in this forum contains some efficiency settings, e.g. caching large files except ATLAS EVNT files, and I don't know whether they can be reproduced with other proxies.
ID: 46863
Dingo
Joined: 27 Sep 04
Posts: 12
Credit: 917,772
RAC: 0
Message 47418 - Posted: 25 Oct 2022, 3:17:46 UTC
Last modified: 25 Oct 2022, 3:50:23 UTC

I just installed cvmfs on my Ubuntu machines but I am still getting an error.

Before I installed cvmfs I got this error:
<core_client_version>7.16.6</core_client_version>
<![CDATA[
<message>
process exited with code 195 (0xc3, -61)</message>
<stderr_txt>
22:31:22 (1258264): wrapper (7.7.26015): starting
22:31:22 (1258264): wrapper: running run_atlas (--nthreads 12)
[2022-10-24 22:31:22] Arguments: --nthreads 12
[2022-10-24 22:31:22] Threads: 12
[2022-10-24 22:31:22] Checking for CVMFS
[2022-10-24 22:31:22] No cvmfs_config command found, will try listing directly
[2022-10-24 22:31:22] ls: cannot access '/cvmfs/atlas.cern.ch/repo/sw': No such file or directory
[2022-10-24 22:31:22] Failed to list /cvmfs/atlas.cern.ch/repo/sw
[2022-10-24 22:31:22] ** It looks like CVMFS is not installed on this host.
[2022-10-24 22:31:22] ** CVMFS is required to run ATLAS native tasks and can be installed following https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html
[2022-10-24 22:31:22] ** and setting 'CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch' in /etc/cvmfs/default.local
22:41:23 (1258264): run_atlas exited; CPU time 0.010115
22:41:23 (1258264): app exit status: 0x1
22:41:23 (1258264): called boinc_finish(195)


So I installed cvmfs and checked the install and did not get any errors when I used
 cvmfs_config probe 


I also configured cvmfs as per the web page with
sed -i 's%#+dir:/etc/auto.master.d%+dir:/etc/auto.master.d%' /etc/auto.master
systemctl restart autofs


I downloaded a new task https://lhcathome.cern.ch/lhcathome/result.php?resultid=367523657

After it ran I am still getting the error:

<core_client_version>7.16.6</core_client_version>
<![CDATA[
<message>
process exited with code 195 (0xc3, -61)</message>
<stderr_txt>
03:03:32 (6989): wrapper (7.7.26015): starting
03:03:32 (6989): wrapper: running run_atlas (--nthreads 12)
[2022-10-25 03:03:32] Arguments: --nthreads 12
[2022-10-25 03:03:32] Threads: 12
[2022-10-25 03:03:32] Checking for CVMFS
[2022-10-25 03:03:32] Probing /cvmfs/atlas.cern.ch... Failed!
[2022-10-25 03:03:32] Probing /cvmfs/atlas-condb.cern.ch... Failed!
[2022-10-25 03:03:32] cvmfs_config probe atlas.cern.ch atlas-condb.cern.ch failed, aborting the job
03:13:33 (6989): run_atlas exited; CPU time 0.163581
03:13:33 (6989): app exit status: 0x1
03:13:33 (6989): called boinc_finish(195)


Proud Founder and member of BOINC@AUSTRALIA



Have a look at my WebCam
ID: 47418
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 47419 - Posted: 25 Oct 2022, 5:18:06 UTC - in response to Message 47418.  

Did you also install the latest config package?
It can be found here:
http://ecsft.cern.ch/dist/cvmfs/cvmfs-config/
Download the one matching your OS and starting with "cvmfs-config-default-latest...".

Then modify/create /etc/cvmfs/default.local according to the advice here and in this forum.
If unsure, post that file.


Run "cvmfs_config probe" first from a console.
As long as this fails it makes no sense to download any ATLAS/Theory native task.
This just results in lots of downloads and failed tasks.
ID: 47419
Dingo
Joined: 27 Sep 04
Posts: 12
Credit: 917,772
RAC: 0
Message 47420 - Posted: 25 Oct 2022, 6:33:21 UTC
Last modified: 25 Oct 2022, 6:39:45 UTC

I don't know which file to download from http://ecsft.cern.ch/dist/cvmfs/cvmfs-config/ . My system is Ubuntu 20.04, kernel 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux.



OK I started from scratch and did the following:

cd ~/
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
rm -f cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install cvmfs cvmfs-config-default
sudo cvmfs_config setup
sudo wget https://lhcathome.cern.ch/lhcathome/download/default.local -O /etc/cvmfs/default.local
sudo cvmfs_config reload
sudo sed -i '$ a\kernel.unprivileged_userns_clone = 1' /etc/sysctl.conf
sudo sysctl -p
sudo wget http://lhcathome.cern.ch/lhcathome/download/create-boinc-cgroup -O /sbin/create-boinc-cgroup
sudo wget http://lhcathome.cern.ch/lhcathome/download/boinc-client.service -O /etc/systemd/system/boinc-client.service
sudo systemctl daemon-reload
sudo systemctl restart boinc-client



I downloaded a new task https://lhcathome.cern.ch/lhcathome/result.php?resultid=367712964 and it ran, but I still get the error:


[2022-10-25 05:44:27] Threads: 12
[2022-10-25 05:44:27] Checking for CVMFS
[2022-10-25 05:44:28] Probing /cvmfs/atlas.cern.ch... Failed!
[2022-10-25 05:44:28] Probing /cvmfs/atlas-condb.cern.ch... Failed!
[2022-10-25 05:44:28] cvmfs_config probe atlas.cern.ch atlas-condb.cern.ch failed, aborting the job


What command do I need to run?

ID: 47420
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2541
Credit: 254,608,838
RAC: 23,290
Message 47421 - Posted: 25 Oct 2022, 17:51:02 UTC - in response to Message 47420.  

I downloaded a new task ...

As already mentioned:
Downloading new tasks makes no sense as long as "cvmfs_config probe" run in a console window returns this output:
Probing /cvmfs/atlas.cern.ch... Failed!

You just burn all native tasks as long as your CVMFS issue is not fixed!



I don't know which file to download ...

Since you mentioned "Ubuntu" this one should be fine:
http://ecsft.cern.ch/dist/cvmfs/cvmfs-config/cvmfs-config-default_latest_all.deb
Install it after or together with the main CVMFS client package, then run "cvmfs_config setup".

In your /etc/cvmfs/default.local add/uncomment this line:
CVMFS_HTTP_PROXY="auto;DIRECT"
Be aware of the quoting. It must be double quotes here because of the ";" within the value.

CVMFS requires user namespaces enabled.
How to do this depends on the linux kernel you are running.

This is a quick check:
sudo sysctl user.max_user_namespaces
The returned value must not be 0.
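To make the setting persistent across reboots, a sysctl drop-in like the following can be used (the file name and the value 15000 are assumptions; kernel.unprivileged_userns_clone exists only on Debian-patched kernels):

```conf
# /etc/sysctl.d/90-userns.conf - apply with "sudo sysctl --system"
user.max_user_namespaces = 15000
# Debian/Ubuntu kernels only:
kernel.unprivileged_userns_clone = 1
```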

Other hints can be found on the internet or here:
https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=486&postid=6520


Reboot after you have made the changes.




create-boinc-cgroup and boinc-client.service don't play a role for now.
First focus on the CVMFS configuration.
ID: 47421
Dingo
Joined: 27 Sep 04
Posts: 12
Credit: 917,772
RAC: 0
Message 47423 - Posted: 26 Oct 2022, 6:19:07 UTC - in response to Message 47421.  

Just too much trouble for an old fart like me. I will just run them in VirtualBox.

Thanks for trying to help me though.
ID: 47423
AndreyOR

Joined: 8 Dec 19
Posts: 37
Credit: 7,587,438
RAC: 0
Message 47588 - Posted: 14 Dec 2022, 11:26:23 UTC

Has anyone seen a scenario where running cvmfs_config probe returns success, but native ATLAS fails due to a probe failure when run by the wrapper at task startup? What could be some possible reasons?
For context, WSL2 now supports systemd. I enabled it to see how things would go, hoping to run ATLAS multi-core on WSL2. It didn't go well for ATLAS: according to the error log the probe failed, however it succeeds when I run it in isolation from the command prompt.
ID: 47588
kotenok2000
Joined: 21 Feb 11
Posts: 72
Credit: 570,086
RAC: 1
Message 47983 - Posted: 8 Apr 2023, 1:52:08 UTC

You can pipe cvmfs_config stat through column -t -s=' ' to make it more readable:

 user@debian-lhcathome:~$ cvmfs_config stat 
 
Running /usr/bin/cvmfs_config stat cvmfs-config.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.10.1.0 1396 128 25060 22 3 1 1428226 10240001 0 130560 0 142 100.000 19 4 http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch DIRECT 1
 
Running /usr/bin/cvmfs_config stat atlas.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.10.1.0 1489 128 49464 117681 0 59 1428226 10240001 0 130560 0 163411 99.930 2354 35 http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch DIRECT 1
 
Running /usr/bin/cvmfs_config stat sft.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.10.1.0 2039 127 30844 25608 3 5 1428226 10240001 0 130560 0 460 99.160 41 2 http://s1cern-cvmfs.openhtc.io/cvmfs/sft.cern.ch DIRECT 1
 
compared to
 user@debian-lhcathome:~$  cvmfs_config stat | column -t -s=' ' 
Running   /usr/bin/cvmfs_config  stat       cvmfs-config.cern.ch:                                                                                                                                                                                                   
VERSION   PID                    UPTIME(M)  MEM(K)                 REVISION  EXPIRES(M)  NOCATALOGS  CACHEUSE(K)  CACHEMAX(K)  NOFDUSE  NOFDMAX  NOIOERR  NOOPEN  HITRATE(%)  RX(K)  SPEED(K/S)  HOST                                                       PROXY   ONLINE
2.10.1.0  1396                   129        25060                  22        2           1           1428226      10240001     0        130560   0        142     100.000     19     4           http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch  DIRECT  1
Running   /usr/bin/cvmfs_config  stat       atlas.cern.ch:                                                                                                                                                                                                          
VERSION   PID                    UPTIME(M)  MEM(K)                 REVISION  EXPIRES(M)  NOCATALOGS  CACHEUSE(K)  CACHEMAX(K)  NOFDUSE  NOFDMAX  NOIOERR  NOOPEN  HITRATE(%)  RX(K)  SPEED(K/S)  HOST                                                       PROXY   ONLINE
2.10.1.0  1489                   129        49464                  117681    0           59          1428226      10240001     0        130560   0        163411  99.930      2354   35          http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch         DIRECT  1
Running   /usr/bin/cvmfs_config  stat       sft.cern.ch:                                                                                                                                                                                                            
VERSION   PID                    UPTIME(M)  MEM(K)                 REVISION  EXPIRES(M)  NOCATALOGS  CACHEUSE(K)  CACHEMAX(K)  NOFDUSE  NOFDMAX  NOIOERR  NOOPEN  HITRATE(%)  RX(K)  SPEED(K/S)  HOST                                                       PROXY   ONLINE
2.10.1.0  2039                   127        30844                  25608     3           5           1428226      10240001     0        130560   0        460     99.160      41     2           http://s1cern-cvmfs.openhtc.io/cvmfs/sft.cern.ch           DIRECT  1
  
ID: 47983


©2024 CERN