Message boards : ATLAS application : ATLAS native - Configure CVMFS to work with openhtc.io
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
OK. I get the same warnings, but my own setup works. There's no difference from yours except one line that is Debian/Ubuntu specific. If you get an OK when you run "cvmfs_config reload" followed by "cvmfs_config probe", then it should work. "cvmfs_config stat" will tell you which server is currently configured. If you get errors, try a "cvmfs_config wipecache".
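Put together, the check sequence described above is just four standard cvmfs_config subcommands, run as root:

```
cvmfs_config reload              # re-read the configuration files
cvmfs_config probe               # try to mount and contact each configured repository
cvmfs_config stat atlas.cern.ch  # shows which server is currently in use
cvmfs_config wipecache           # only if the probe reports errors
```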
Joined: 15 Nov 14 · Posts: 602 · Credit: 24,371,321 · RAC: 0
> If you get an OK when you run "cvmfs_config reload" followed by "cvmfs_config probe" then it should work.

It worked OK, and I am running a native ATLAS now. Thanks.
Joined: 26 Jul 12 · Posts: 18 · Credit: 2,456,826 · RAC: 0
I'm trying to run the ATLAS native application, however I see the following error:

This is not SLC6, need to run with Singularity....
Checking Singularity...
sh: 1: singularity: not found
Singularity is not installed, aborting

It seems the system can't find singularity, but I've already installed it:

$ sudo su - boinc
$ singularity
Linux container platform optimized for High Performance Computing (HPC) and Enterprise Performance Computing (EPC)
Usage: singularity [global options...]

I get the same output as root. What might be the problem?
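One hypothetical way to narrow this down (not from the thread): the client runs tasks as the boinc user with the service's environment, which can differ from an interactive "sudo su - boinc" shell, so check what that user can resolve and, if needed, link the binary into a standard location:

```
# check whether singularity resolves for the boinc user (hypothetical diagnostic)
sudo -u boinc sh -c 'command -v singularity' || echo "singularity not on this PATH"

# if it was installed somewhere non-standard, e.g. /usr/local/bin, a symlink
# into /usr/bin is a simple workaround (the path here is an assumption):
# sudo ln -s /usr/local/bin/singularity /usr/bin/singularity
```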
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
Did you follow gyllic's guide? Be aware that ATLAS native is still beta and, depending on your local environment, it can be a bit tricky to get it running. In addition, you might consider making your hosts visible to other volunteers, as the logs sometimes contain important hints: https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project
Joined: 26 Jul 12 · Posts: 18 · Credit: 2,456,826 · RAC: 0
> Did you follow gyllic's guide?

Thank you. Yes, I did follow this guide step by step - a good guide, BTW. I made my computers visible to other volunteers. Hope it helps to understand the root cause of my issue.
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
Please run "cvmfs_config probe". If this fails try a "cvmfs_config wipecache" and again a "cvmfs_config probe". If it still fails run "cvmfs_config showconfig -s atlas.cern.ch" and post the output here. If the cvmfs checks succeed it's most likely a singularity issue. I suggest to open a new thread for that. |
Joined: 16 Jul 05 · Posts: 24 · Credit: 35,251,537 · RAC: 0
> I'm trying to run Atlas native application however I can see the following error:

I use "singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/images/singularity/x86_64-slc6.img hostname" to test singularity (this also depends on CVMFS working, of course). I install singularity from RPMs and haven't seen your issue; I would guess that you have installed singularity somewhere non-standard, so you need to restart the boinc client and make sure it's picking the path up correctly. Otherwise, if you have a deeper problem, please start a new thread so others with the same problem can find it.
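The test mentioned above as a runnable line (image path taken verbatim from the post; on success it should simply print the machine's hostname from inside the SLC6 container):

```
singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/images/singularity/x86_64-slc6.img hostname
```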
Joined: 5 Nov 15 · Posts: 144 · Credit: 6,301,268 · RAC: 0
From the Cloudflare http://OpenHTC.io website: "ask computing sites with many clients to supply their own caching proxy servers." Any idea what number they consider 'many'? 8, maybe 15, but certainly not 30 per individual IP address? Is it measured per project? Would they start objecting once BOINC volunteers pass 1000 clients all doing ATLAS, even though each individual IP only has 4 to 6 clients?
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
Don't worry. It's written on the same page:

> We [=CERN/Fermilab] have discussed it with them [=Cloudflare] and they said we're not likely to cause problems if we keep the bandwidth low, so we plan to use it mostly for reducing latencies of applications with many small objects ...

ATM there are only 2 LHC@home apps that (may) make use of openhtc.io: the recent Theory app and ATLAS native. ATLAS native only if it is configured to use openhtc.io, thus the "(may)". ATLAS vbox is currently not configured to use openhtc.io.

ATLAS native shouldn't be a problem for the following reasons:
1. The huge one-timers like EVNT/HITS files are always transferred directly via lhcathome-upload.cern.ch.
2. If your local CVMFS client is configured to use at least 4 GB cache, it would serve 99% of the requests without internet access (see the sketch below).

Theory causes a few more requests, but the file sizes are much smaller and the requests are spread among lots of proxy machines that are part of the Cloudflare network. Thus an arbitrary proxy machine has very little work regarding LHC@home.

Last but not least, you may set up your own local proxy. A working basic configuration can be found here.
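A minimal sketch of that cache setting; CVMFS_QUOTA_LIMIT is the standard CVMFS cache-quota parameter and takes a value in MB:

```
# /etc/cvmfs/default.local (excerpt)
CVMFS_QUOTA_LIMIT=4096    # let the local CVMFS cache grow to 4 GB
```

Run "cvmfs_config reload" afterwards to apply it.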
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration.

Add the following parameter to /etc/cvmfs/default.local:

CVMFS_SEND_INFO_HEADER=yes

Create/update the server list in /etc/cvmfs/domain.d/cern.ch.local. Be aware that quoting is a must due to the ";" in the server list:

CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1unl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io/cvmfs/@fqrn@"

Activate the changes as root:

cvmfs_config reload

Both settings help Cloudflare to optimize the traffic between their proxies and CERN's stratum one servers, especially if the Cloudflare proxies encounter a CACHE_MISS. Volunteers running VirtualBox VMs get the new settings automatically during the boot process.
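To confirm the new server list is actually in use after the reload, a hedged check (per the earlier post, "cvmfs_config stat" reports the currently configured server, so an *.openhtc.io host should appear):

```
sudo cvmfs_config reload
cvmfs_config stat -v atlas.cern.ch | grep -i openhtc
```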
Joined: 7 Apr 11 · Posts: 4 · Credit: 56,682,229 · RAC: 0
> Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration.

I find a CVMFS_SERVER_URL in /etc/cvmfs/domain.d/cern.ch.conf. Would updating the one found there be appropriate? Or does it need to go in the .local file instead, as indicated? And .local will override .conf, so we wouldn't need to remove the line from .conf, correct?

Also, this is more important for users not using a local proxy server, correct?

Also, has this been updated in the initial files indicated for a fresh install? So, things like the Ubuntu repository files and the "master" copy of the default.local file downloaded from https://lhcathome.cern.ch/lhcathome/download/default.local
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
> I find a CVMFS_SERVER_URL in /etc/cvmfs/domain.d/cern.ch.conf. Would updating the one found there be appropriate? Or does it need to go in the .local file instead as indicated? And .local will override .conf, so we wouldn't need to remove the line from .conf, correct?

Right. *.conf files should be left untouched. If you need local settings, place them in the corresponding *.local files.

> Also, this is more important for users not using a local proxy server, correct?

It's independent of the local proxy setting. CERN runs only a couple of stratum one servers spread around the world. Cloudflare runs hundreds of proxies, and one of them will most likely be much closer to your computer's location than the closest stratum one. This is why LHC@Home volunteers should configure *.openhtc.io. A local proxy would support both configurations and avoid lots of traffic between your LAN and the internet.

> Also, has this been updated in the initial files indicated for a fresh install? So, things like the Ubuntu Repository files ...

The basic configuration is made for lots of different users and sites, not only for LHC@Home volunteers. Cloudflare's openhtc.io is mainly intended for LHC@Home volunteers, hence the installation packages might not be updated.

> ... and the "master" copy of the default.local file downloaded from https://lhcathome.cern.ch/lhcathome/download/default.local

Good hint. This should indeed be updated.
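To see which value wins after the *.local override, a quick check with the standard showconfig subcommand (newer CVMFS versions also annotate each parameter with the file it came from):

```
cvmfs_config showconfig atlas.cern.ch | grep CVMFS_SERVER_URL
# expected: the openhtc.io list, sourced from /etc/cvmfs/domain.d/cern.ch.local
```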
Joined: 12 Jun 18 · Posts: 126 · Credit: 53,906,164 · RAC: 0
I'm trying to figure out why I'm seeing so many failed ATLAS WUs. I did update:

sudo xed /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes

sudo xed /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1unl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io/cvmfs/@fqrn@"

sudo cvmfs_config reload

Yet I went back and checked and found that some computers lack the line:

CVMFS_SEND_INFO_HEADER=yes

Is there anything that might overwrite that file??? Is this update configuring for openhtc.io or is there something else I need to do???
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
Theory native requires CVMFS, but ATLAS native also requires Singularity. At least this host is missing Singularity, hence all ATLAS tasks fail:
https://lhcathome.cern.ch/lhcathome/results.php?hostid=10603415&offset=0&show_names=0&state=0&appid=14

Regarding the missing config line: Are you sure you saved the changes? Are you sure you checked the right host?

Additional comments: During the last few days roughly 800-900 cores from your cluster requested work from LHC@home. This is really very nice, but:
1. To feed this huge number of cores a local proxy is a must. Do you have one?
2. You may keep an eye on your upload direction. It's hopefully not close to saturation.
Joined: 12 Jun 18 · Posts: 126 · Credit: 53,906,164 · RAC: 0
I checked Rig-13 that you linked, and Singularity 2.6 is there in its folder in my home directory. I figured out how that line "CVMFS_SEND_INFO_HEADER=yes" vanished. On Rig-13 I checked it first and it was there. I ran:

sudo cvmfs_config reload

and it listed no actions. It makes no sense why the error says Singularity is not found, so the only thing I know to do is reinstall all the stuff:

wget https://github.com/singularityware/singularity/releases/download/2.6.0/singularity-2.6.0.tar.gz ; tar xvf singularity-2.6.0.tar.gz ; cd singularity-2.6.0 ; sudo apt install libarchive-dev ; ./configure --prefix=/usr/local ; make ; sudo make install

wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb ; sudo dpkg -i cvmfs-release-latest_all.deb ; rm -f cvmfs-release-latest_all.deb ; sudo apt-get update ; sudo apt-get install cvmfs ; sudo apt install glibc-doc open-iscsi watchdog

sudo wget https://lhcathomedev.cern.ch/lhcathome-dev/download/default.local -O /etc/cvmfs/default.local ; sudo cvmfs_config setup ; sudo echo "/cvmfs /etc/auto.cvmfs" > /etc/auto.master.d/cvmfs.autofs ; sudo systemctl restart autofs ; cvmfs_config probe

Then I checked the first "update" file:

sudo xed /etc/cvmfs/default.local

and CVMFS_SEND_INFO_HEADER=yes was missing. Since that file is written by the installation sequence, the reinstall overwrote it. The second "update" file was only created by:

sudo xed /etc/cvmfs/domain.d/cern.ch.local

so it persisted across the reinstall. Then I ran:

sudo cvmfs_config reload

and got a long list of actions.

I don't know how to set up a local proxy. I read the SQUID link and it's outside my wheelhouse. I see "openhtc.io" in the title of this thread but don't know how to do it unless the sequence of commands above does it. I have no complaint about ULing and DLing files from CERN. Depending on how I switch to ATLAS I may have ~240 WUs DLing, and each is ~365 MB, so it slows down.
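A small guard against exactly this overwrite (a sketch: re-append the header setting only when the freshly downloaded default.local lacks it):

```
# run after the "wget ... -O /etc/cvmfs/default.local" step above
grep -q '^CVMFS_SEND_INFO_HEADER=' /etc/cvmfs/default.local || \
    echo 'CVMFS_SEND_INFO_HEADER=yes' | sudo tee -a /etc/cvmfs/default.local
sudo cvmfs_config reload
```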
Joined: 13 Apr 18 · Posts: 443 · Credit: 8,438,885 · RAC: 0
> I don't know how to setup a local proxy. I read the SQUID link and it's outside my wheelhouse. I see "openhtc.io" in the title of this thread but don't know how to do it unless the sequence of commands above do it.

The instructions for Squid are a bit confusing, but you've got the CVMFS, the Singularity - you're over the hard part. Ask in the squid thread and I'll give you a working example.

With 240 WUs DLing.... I just can't see it working nicely without a Squid. I can't even see it working badly unless you have an outrageously fast connection and unlimited data.
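For orientation, a minimal Squid sketch of the kind such setups typically use (hypothetical addresses and sizes, not the working example promised for the squid thread):

```
# /etc/squid/squid.conf - minimal caching proxy for CVMFS clients on a LAN
http_port 3128
acl localnet src 192.168.0.0/16      # adjust to your LAN range (assumption)
http_access allow localnet
http_access deny all
maximum_object_size 1024 MB
cache_dir ufs /var/spool/squid 20000 16 256
```

Clients would then point CVMFS at the proxy via CVMFS_HTTP_PROXY="http://<proxy-ip>:3128" in /etc/cvmfs/default.local.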
Joined: 15 Jun 08 · Posts: 2488 · Credit: 247,475,842 · RAC: 120,938
A new version of the HowTo can be found here:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594

New comments and questions should be posted here:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5595