Message boards : ATLAS application : ATLAS native - Configure CVMFS to work with openhtc.io

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 36867 - Posted: 24 Sep 2018, 20:12:45 UTC - in response to Message 36865.  

OK. I get the same warnings, but my own setup works.
There's no difference to yours except one line that is Debian/Ubuntu specific.

If you get an OK when you run "cvmfs_config reload" followed by "cvmfs_config probe" then it should work.
"cvmfs_config stat" will tell you which server is currently configured.

If you get errors, try a "cvmfs_config wipecache".
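Put together, the diagnostic sequence above looks roughly like this (a sketch; it assumes the cvmfs package is installed and does nothing beyond the checks and an optional cache wipe):

```shell
# Sketch of the CVMFS check sequence described above.
# Runs the real checks only when cvmfs_config is actually installed.
if command -v cvmfs_config >/dev/null 2>&1; then
    sudo cvmfs_config reload                # re-read the *.local config files
    if cvmfs_config probe; then
        cvmfs_config stat atlas.cern.ch     # shows the server currently in use
    else
        sudo cvmfs_config wipecache         # drop the local cache and retry
        cvmfs_config probe
    fi
else
    echo "cvmfs_config not found - is CVMFS installed?"
fi
```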

Jim1348
Joined: 15 Nov 14
Posts: 330
Credit: 10,774,608
RAC: 16,629
Message 36869 - Posted: 24 Sep 2018, 20:53:55 UTC - in response to Message 36867.  

If you get an OK when you run "cvmfs_config reload" followed by "cvmfs_config probe" then it should work.

It worked OK, and I am running a native ATLAS now.

Thanks.

Sid
Joined: 26 Jul 12
Posts: 18
Credit: 1,725,281
RAC: 12,790
Message 37017 - Posted: 14 Oct 2018, 13:37:18 UTC - in response to Message 36869.  

I'm trying to run the ATLAS native application; however, I see the following error:
This is not SLC6, need to run with Singularity....
Checking Singularity...
sh: 1: singularity: not found
Singularity is not installed, aborting

It seems the system can't find Singularity. But I've already installed it:
$ sudo su - boinc
$ singularity

Linux container platform optimized for High Performance Computing (HPC) and
Enterprise Performance Computing (EPC)

Usage:
singularity [global options...]

The same happens as root.
What might be the problem?

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 37018 - Posted: 14 Oct 2018, 13:54:17 UTC - in response to Message 37017.  

Did you follow gyllic's guide?

Be aware that ATLAS native is still beta and depending on your local environment it can be a bit tricky to get it running.


In addition, you may consider making your hosts visible to other volunteers, as the logs sometimes contain important hints:
https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project

Sid
Joined: 26 Jul 12
Posts: 18
Credit: 1,725,281
RAC: 12,790
Message 37020 - Posted: 14 Oct 2018, 16:58:54 UTC - in response to Message 37018.  

Did you follow gyllic's guide?

Be aware that ATLAS native is still beta and depending on your local environment it can be a bit tricky to get it running.


In addition, you may consider making your hosts visible to other volunteers, as the logs sometimes contain important hints:
https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project


Thank you.
Yes, I did follow this guide step by step - good guide, BTW.
I made my computers visible to other volunteers. Hopefully that helps in understanding the root cause of my issue.

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 37028 - Posted: 15 Oct 2018, 6:46:15 UTC - in response to Message 37020.  

Please run "cvmfs_config probe".
If this fails try a "cvmfs_config wipecache" and again a "cvmfs_config probe".

If it still fails run "cvmfs_config showconfig -s atlas.cern.ch" and post the output here.


If the CVMFS checks succeed, it's most likely a Singularity issue.
I suggest opening a new thread for that.

BITLab Argo
Joined: 16 Jul 05
Posts: 24
Credit: 35,251,537
RAC: 0
Message 37031 - Posted: 15 Oct 2018, 11:42:18 UTC - in response to Message 37017.  

I use
singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/images/singularity/x86_64-slc6.img hostname

to test singularity (this also depends on CVMFS working, of course).

I install singularity from RPMs and haven't seen your issue; I would guess that you have installed singularity somewhere non-standard, so you need to restart the BOINC client and make sure it picks up the path correctly.

Else if you've a deeper problem then please start a new thread so others with the same problem can find it.
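One common cause of the "singularity: not found" message is that the BOINC client runs with a different PATH than your login shell. A quick check, as a sketch (the 'boinc' account name is an assumption; adjust it to your setup):

```shell
# Where does this shell find singularity (if at all)?
if command -v singularity >/dev/null 2>&1; then
    echo "singularity found at: $(command -v singularity)"
else
    echo "singularity is not on PATH in this shell"
fi

# Repeat the check as the account the BOINC client runs under.
# (Assumes the service account is called 'boinc'.)
# sudo -u boinc sh -lc 'command -v singularity || echo "not on PATH for boinc"'
```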


I'm trying to run Atlas native application however I can see the following error:
This is not SLC6, need to run with Singularity....
Checking Singularity...
sh: 1: singularity: not found
Singularity is not installed, aborting

It seems the system can't find Singularity. But I've already installed it: ...


marmot
Joined: 5 Nov 15
Posts: 127
Credit: 6,204,427
RAC: 197
Message 37784 - Posted: 19 Jan 2019, 1:07:16 UTC - in response to Message 35857.  

From the Cloudflare http://OpenHTC.io website:

"ask computing sites with many clients to supply their own caching proxy servers."

Any idea what number they assign to 'many'?

8, maybe 15 but certainly not 30 per individual IP address?

Is it measured per project?
Would they start objecting once BOINC volunteers pass 1,000 clients all doing ATLAS, even though each individual IP only has 4 to 6 clients?

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 37789 - Posted: 19 Jan 2019, 9:06:25 UTC - in response to Message 37784.  

Don't worry.
It's written on the same page:
We [=CERN/Fermilab] have discussed it with them [=Cloudflare] and they said we’re not likely to cause problems if we keep the bandwidth low, so we plan to use it mostly for reducing latencies of applications with many small objects ...


ATM there are only 2 LHC@home apps that (may) make use of openhtc.io: the recent Theory app and ATLAS native.
ATLAS native uses it only if it is configured to do so, hence the "(may)".

ATLAS vbox is currently not configured to use openhtc.io.

ATLAS native shouldn't be a problem for the following reasons:
1. The huge one-timers like EVNT/HITS files are always transferred directly via lhcathome-upload.cern.ch.
2. If your local CVMFS client is configured to use at least 4 GB of cache, it will serve 99% of the requests without internet access.
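The cache size mentioned in point 2 is controlled by CVMFS_QUOTA_LIMIT (in MB) in /etc/cvmfs/default.local; a minimal sketch for a 4 GB cache:

```
# /etc/cvmfs/default.local (sketch; the value is in MB)
CVMFS_QUOTA_LIMIT=4096
```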


Theory causes a bit more requests, but the file sizes are much smaller and the requests are spread among lots of proxy machines that are part of the Cloudflare network. Thus an arbitrary proxy machine has very little work regarding LHC@home.


Last but not least you may set up your own local proxy. A working basic configuration can be found here.
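For orientation, a local proxy is wired into CVMFS with a single parameter in /etc/cvmfs/default.local; a minimal sketch, assuming a Squid listening at 192.168.1.10:3128 (a hypothetical address), with direct connections as fallback:

```
# /etc/cvmfs/default.local (sketch; replace host/port with your proxy)
CVMFS_HTTP_PROXY="http://192.168.1.10:3128;DIRECT"
```

Activate it with "cvmfs_config reload", as usual.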

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 39281 - Posted: 4 Jul 2019, 19:53:20 UTC

Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration.

Add the following parameter to /etc/cvmfs/default.local.
CVMFS_SEND_INFO_HEADER=yes


Create/Update the server list in /etc/cvmfs/domain.d/cern.ch.local.
Be aware that quoting is a must due to the ";" in the server list.
CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1unl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io/cvmfs/@fqrn@"


Activate the changes as root
cvmfs_config reload



Both settings help Cloudflare to optimize the traffic between their proxies and CERN's stratum one servers, especially if the Cloudflare proxies encounter a CACHE_MISS.
Volunteers running VirtualBox VMs get the new settings automatically during the boot process.
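For hosts that are administered in bulk, the two changes above can be applied with a small script; a sketch that assumes the standard /etc/cvmfs layout and must be run as root:

```shell
# Sketch: apply the two CVMFS configuration changes described above.
# Run as root; assumes the standard /etc/cvmfs layout.
CONF=/etc/cvmfs
mkdir -p "$CONF/domain.d"

# 1. Add CVMFS_SEND_INFO_HEADER=yes to default.local (if not already there).
grep -qs '^CVMFS_SEND_INFO_HEADER=yes' "$CONF/default.local" ||
    echo 'CVMFS_SEND_INFO_HEADER=yes' >> "$CONF/default.local"

# 2. Write the openhtc.io server list; the quotes are mandatory
#    because of the ";" separators.
cat > "$CONF/domain.d/cern.ch.local" <<'EOF'
CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1unl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io/cvmfs/@fqrn@"
EOF

# 3. Activate the changes (skipped when CVMFS itself is not installed).
if command -v cvmfs_config >/dev/null 2>&1; then
    cvmfs_config reload
fi
```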

Mumps [MM]
Joined: 7 Apr 11
Posts: 4
Credit: 56,682,229
RAC: 9,683
Message 39286 - Posted: 5 Jul 2019, 14:25:01 UTC - in response to Message 39281.  
Last modified: 5 Jul 2019, 14:34:28 UTC

Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration.

Create/Update the server list in /etc/cvmfs/domain.d/cern.ch.local.
Be aware that quoting is a must due to the ";" in the server list.
CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1unl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io/cvmfs/@fqrn@"

I find a CVMFS_SERVER_URL in /etc/cvmfs/domain.d/cern.ch.conf. Would updating the one found there be appropriate? Or does it need to go in the .local file instead as indicated? And .local will override .conf, so we wouldn't need to remove the line from .conf, correct? Also, this is more important for users not using a local proxy server, correct?

Also, has this been updated in the initial files indicated for a fresh install? So, things like the Ubuntu Repository files and the "master" copy of the default.local file downloaded from
https://lhcathome.cern.ch/lhcathome/download/default.local

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 39287 - Posted: 5 Jul 2019, 15:34:00 UTC - in response to Message 39286.  

I find a CVMFS_SERVER_URL in /etc/cvmfs/domain.d/cern.ch.conf. Would updating the one found there be appropriate? Or does it need to go in the .local file instead as indicated? And .local will override .conf, so we wouldn't need to remove the line from .conf, correct?

Right. *.conf files should be left untouched. If you need local settings, place them in the corresponding *.local files.


Also, this is more important for users not using a local proxy server, correct?

It's independent of the local proxy setting.
CERN runs only a couple of stratum one servers spread around the world.
Cloudflare runs hundreds of proxies and one of them will most likely be much closer to your computer's location than the closest stratum one.
This is why LHC@Home volunteers should configure *.openhtc.io.

A local proxy would support both configurations and avoids lots of traffic between your LAN and the internet.


Also, has this been updated in the initial files indicated for a fresh install? So, things like the Ubuntu Repository files ...

The basic configuration is made for lots of different users and sites, not only for LHC@Home volunteers.
Cloudflare's openhtc.io is mainly intended for use by LHC@Home volunteers.
Hence the installation packages might not be updated.

... and the "master" copy of the default.local file downloaded from https://lhcathome.cern.ch/lhcathome/download/default.local

Good hint. This should indeed be updated.

Aurum
Joined: 12 Jun 18
Posts: 54
Credit: 17,129,742
RAC: 258,157
Message 39306 - Posted: 6 Jul 2019, 20:55:48 UTC - in response to Message 39281.  
Last modified: 6 Jul 2019, 21:01:18 UTC

I'm trying to figure out why I'm seeing so many failed ATLAS WUs. I did update:
sudo xed /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes
sudo xed /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1unl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io/cvmfs/@fqrn@"
sudo cvmfs_config reload
Yet I went back and checked and found that some computers lack the line:
CVMFS_SEND_INFO_HEADER=yes
Is there anything that might overwrite that file?
Is this update configuring for openhtc.io, or is there something else I need to do?

computezrmle
Joined: 15 Jun 08
Posts: 1140
Credit: 56,118,805
RAC: 96,975
Message 39309 - Posted: 7 Jul 2019, 6:11:12 UTC - in response to Message 39306.  

Theory native requires CVMFS but ATLAS native also requires Singularity.
At least this host is missing Singularity. Hence all ATLAS tasks fail.
https://lhcathome.cern.ch/lhcathome/results.php?hostid=10603415&offset=0&show_names=0&state=0&appid=14


Regarding the missing config line:
Are you sure you saved the changes?
Are you sure you checked the right host?


Additional comments:
During the last few days roughly 800-900 cores from your cluster requested work from LHC@home.
This is really very nice, but:
1. To feed this huge number of cores a local proxy is a must. Do you have one?
2. You may keep an eye on your upload direction. It's hopefully not close to saturation.

Aurum
Joined: 12 Jun 18
Posts: 54
Credit: 17,129,742
RAC: 258,157
Message 39313 - Posted: 7 Jul 2019, 11:25:33 UTC - in response to Message 39309.  
Last modified: 7 Jul 2019, 11:25:56 UTC

I checked Rig-13 that you linked and Singularity 2.6 is there in its folder in my home directory.
I figured out how that line "CVMFS_SEND_INFO_HEADER=yes" vanished. On Rig-13 I checked it first and it was there. I ran:
sudo cvmfs_config reload
and it listed no actions.
It makes no sense that the error says Singularity is not found, so the only thing I know to do is reinstall all the stuff:
wget https://github.com/singularityware/singularity/releases/download/2.6.0/singularity-2.6.0.tar.gz
tar xvf singularity-2.6.0.tar.gz
cd singularity-2.6.0
sudo apt install libarchive-dev
./configure --prefix=/usr/local
make
sudo make install

wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
rm -f cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install cvmfs
sudo apt install glibc-doc open-iscsi watchdog

sudo wget https://lhcathomedev.cern.ch/lhcathome-dev/download/default.local -O /etc/cvmfs/default.local
sudo cvmfs_config setup
sudo sh -c 'echo "/cvmfs /etc/auto.cvmfs" > /etc/auto.master.d/cvmfs.autofs'
sudo systemctl restart autofs
cvmfs_config probe

Then I checked the first "update" file:
sudo xed /etc/cvmfs/default.local
and CVMFS_SEND_INFO_HEADER=yes was missing. Since that file was recreated by the installation sequence, my change was overwritten. The second "update" file was only created by:
sudo xed /etc/cvmfs/domain.d/cern.ch.local
so it persisted across the reinstall. Then I ran:
sudo cvmfs_config reload
and got a long list of actions.

I don't know how to set up a local proxy. I read the SQUID link and it's outside my wheelhouse. I see "openhtc.io" in the title of this thread but don't know how to do it unless the sequence of commands above does it.
I have no complaint about ULing and DLing files from CERN. Depending on how I switch to ATLAS I may have ~240 WUs DLing, and each is ~365 MB, so it slows down.

bronco
Joined: 13 Apr 18
Posts: 443
Credit: 8,247,204
RAC: 10,030
Message 39317 - Posted: 8 Jul 2019, 13:27:54 UTC - in response to Message 39313.  

I don't know how to setup a local proxy. I read the SQUID link and it's outside my wheelhouse. I see "openhtc.io" in the title of this thread but don't know how to do it unless the sequence of commands above do it.
I have no complaint about ULing and DLing files from CERN. Depending on how I switch to ATLAS I may have ~240 WUs DLing and each is ~365 MB so it slows down.

The instructions for Squid are a bit confusing, but you've got CVMFS and Singularity working, so you're over the hard part. Ask in the Squid thread and I'll give you a working example. With 240 WUs DLing... I just can't see it working nicely without a Squid. I can't even see it working badly unless you have an outrageously fast connection and unlimited data.


©2019 CERN