Message boards :
ATLAS application :
error on Atlas native: 195 (0x000000C3) EXIT_CHILD_FAILED
Author | Message |
---|---|
Send message Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0 |
It works for me on Ubuntu 20.04.1 using the "Download and install Singularity from a release" procedure described here: https://sylabs.io/guides/3.0/user-guide/installation.html I have run a couple of native ATLAS tasks with it. |
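For reference, that release-install procedure amounts to roughly the following (a sketch based on the linked guide, not an exact transcript; the version number 3.7.1 and the GitHub path are examples and may differ, and building requires Go plus the usual build tools):

```shell
# Sketch: build and install Singularity from a release tarball.
# VERSION is an example; pick the release you actually want.
export VERSION=3.7.1
wget https://github.com/hpcng/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz
tar -xzf singularity-${VERSION}.tar.gz
cd singularity
./mconfig                      # configure the build
make -C builddir               # compile
sudo make -C builddir install  # install system-wide
singularity --version          # sanity check
```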
Send message Joined: 9 Jan 15 Posts: 151 Credit: 431,596,822 RAC: 0 |
Same issue as before, no changes yet. As mentioned, it works if you install any version of singularity yourself. This probably makes 20.04 an odd system to deal with regarding permissions for the pre-built singularity container. Tested it yesterday with 3.6.2, the latest release from GitHub. |
Send message Joined: 30 Aug 14 Posts: 145 Credit: 10,847,070 RAC: 0 |
Thanks for the update. Good to know that it works with singularity installed separately, but I'd like ATLAS tasks to work without a local installation of singularity. I still wonder what the cause of the permission problem is, since it appears not only on one distribution but on at least two (latest Ubuntu and CentOS). Those two releases don't even use the same kernel, which makes the permission problem even more mysterious... Why mine when you can research? - GRIDCOIN - Real cryptocurrency without wasting hashes! https://gridcoin.us |
Send message Joined: 15 Nov 14 Posts: 602 Credit: 24,371,321 RAC: 0 |
Thanks for the update. Good to know that it works with singularity installed separately, but I'd like ATLAS tasks to work without a local installation of singularity. I have been wondering that myself. It used to work with just CVMFS, using its own version of singularity, but I have given up hope for that and am just glad that I have found a procedure that works reliably for installing singularity by itself. It may have to do with how you install BOINC, though. I suspect a lot of advanced Linux users compile their own and install it into the home folder, where the permission problem goes away. I use the LocutusOfBorg version of BOINC, which makes me grant additional permissions up front, but it is a lot easier to upgrade. YMMV. |
Send message Joined: 21 Jan 08 Posts: 1 Credit: 14,617,525 RAC: 0 |
I came across this thread because I had the same remount error of /var in the log. Fixed it once by compiling Singularity, fixed it the other times by getting the Ubuntu 20.04 packages from the CernVM repo:
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install cvmfs cvmfs-config-default
Alternatively, the latest cvmfs client for a number of distros is available for manual installation from http://cernvm.cern.ch/portal/filesystem/downloads |
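After installing the packages above, the client setup can be checked with the standard cvmfs_config subcommands — a sketch, with the repository names taken from the configuration shown later in this thread:

```shell
# Set up autofs integration for CVMFS, then try to mount and stat
# the repositories that ATLAS native needs.
sudo cvmfs_config setup
cvmfs_config probe atlas.cern.ch atlas-condb.cern.ch grid.cern.ch
```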
Send message Joined: 2 May 07 Posts: 2197 Credit: 173,398,042 RAC: 44,335 |
The computer crashed unexpectedly, not because of ATLAS native; it runs in a VM. After the restart this task ended with boinc_finish(195): https://lhcathome.cern.ch/lhcathome/result.php?resultid=295940888 Using CentOS 7 as a VM.
[2021-01-18 12:42:30] Using singularity image /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos7.img
[2021-01-18 12:42:30] Checking for singularity binary...
[2021-01-18 12:42:30] which: no singularity in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
[2021-01-18 12:42:30] Singularity is not installed, using version from CVMFS
[2021-01-18 12:42:30] Checking singularity works with /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/current/bin/singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos7.img hostname
[2021-01-18 12:42:44] Singularity isnt working: INFO: Convert SIF file to sandbox...
[2021-01-18 12:42:44] FATAL: while opening capability config file: open /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.2.1/etc/singularity/capability.json: input/output error
[2021-01-18 12:42:44] ./runtime_log
[2021-01-18 12:42:44] ./runtime_log.err
[2021-01-18 12:42:44] ./log.23766992._041786.job.log.1
[2021-01-18 12:42:44] ./pilotlog.txt
12:52:44 (2081): run_atlas exited; CPU time 9.624872
12:52:44 (2081): app exit status: 0x1
12:52:44 (2081): called boinc_finish(195)
Have now installed singularity on this VM. No new task has started so far. This task finished correctly on Agile Boincers: https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=152127019 |
Send message Joined: 8 May 17 Posts: 13 Credit: 40,377,570 RAC: 5,079 |
Can confirm that ATLAS works correctly with a self-built version of Singularity (3.7.1) on a fresh Ubuntu 20.04. A result as example: https://lhcathome.cern.ch/lhcathome/result.php?resultid=297199728 |
Send message Joined: 15 Jun 08 Posts: 2509 Credit: 249,194,699 RAC: 127,235 |
Right. The task returned a HITS file, which counts as a success. Nonetheless you may configure your CVMFS client to use openhtc.io instead of cvmfs-stratum-one.cern.ch. Could you please check whether the following new setting in your /etc/cvmfs/default.local works?
CVMFS_USE_CDN=yes
Then run "cvmfs_config reload" as root. Since your computer list shows much more than a total of 10 cores, it is also suggested that you run a local squid. See: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473 |
Send message Joined: 8 May 17 Posts: 13 Credit: 40,377,570 RAC: 5,079 |
On this test machine, the configuration is now as follows:
$ cat /etc/cvmfs/default.local
CVMFS_USE_CDN=yes
# BEGIN ANSIBLE MANAGED BLOCK
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch,cernvm-prod.cern.ch,sft.cern.ch,alice.cern.ch
CVMFS_QUOTA_LIMIT=4096
CVMFS_CACHE_BASE=/var/lib/cvmfs
CVMFS_HTTP_PROXY=DIRECT
CVMFS_SEND_INFO_HEADER=yes
# END ANSIBLE MANAGED BLOCK
After a while I ran
$ sudo cvmfs_config stat
Running /usr/bin/cvmfs_config stat cvmfs-config.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.7.5.0 1596 22 23752 14 2 1 3189293 4194304 0 65024 0 194 100 3 1 http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch DIRECT 1
Running /usr/bin/cvmfs_config stat atlas.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.7.5.0 1687 22 36640 78110 2 75 3189293 4194304 559 65024 0 10889 99.8898 278 330 http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch DIRECT 1
Running /usr/bin/cvmfs_config stat atlas-condb.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.7.5.0 1811 22 24372 8534 2 1 3189293 4194304 0 65024 0 13 92.3077 66 102 http://s1cern-cvmfs.openhtc.io/cvmfs/atlas-condb.cern.ch DIRECT 1
Running /usr/bin/cvmfs_config stat sft.cern.ch:
VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
2.7.5.0 2186 21 24976 19547 2 5 3189293 4194304 7 65024 0 209 99.5215 3 5 http://s1cern-cvmfs.openhtc.io/cvmfs/sft.cern.ch DIRECT 1
Seems to be using openhtc.io, if I read this right. And yes, installing a proxy is on my todo list :-) |
Send message Joined: 15 Jun 08 Posts: 2509 Credit: 249,194,699 RAC: 127,235 |
2.7.5.0 1596 22 23752 14 2 1 3189293 4194304 0 65024 0 194 100 3 1 http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch DIRECT 1
Perfect. This is how it should look. Thanks for the reply, it's very helpful. ...and yes installing a proxy is on my todo list :-) +1 |
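As a quick way to confirm the CDN switch, the HOST column (third field from the end of each stat line) can be pulled out with awk — a sketch using the sample line above:

```shell
# Sample `cvmfs_config stat` line from this thread; fields are
# whitespace-separated and the last three are HOST, PROXY and ONLINE.
line="2.7.5.0 1596 22 23752 14 2 1 3189293 4194304 0 65024 0 194 100 3 1 http://s1cern-cvmfs.openhtc.io/cvmfs/cvmfs-config.cern.ch DIRECT 1"

# Extract the HOST field (third from the end).
host=$(echo "$line" | awk '{print $(NF-2)}')

# Report whether the client is talking to the openhtc.io CDN.
case "$host" in
  *openhtc.io*) echo "CDN in use" ;;           # prints this for the sample line
  *)            echo "still using $host" ;;
esac
```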
Send message Joined: 8 May 17 Posts: 13 Credit: 40,377,570 RAC: 5,079 |
...and yes installing a proxy is on my todo list :-) Getting off topic, but while we're on the subject of configuring a proxy: is the CVMFS cache (i.e. CVMFS_QUOTA_LIMIT=4096) still required when going through a caching proxy? |
Send message Joined: 15 Jun 08 Posts: 2509 Credit: 249,194,699 RAC: 127,235 |
The updated HowTo may answer your question: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594 |
Send message Joined: 8 May 17 Posts: 13 Credit: 40,377,570 RAC: 5,079 |
Thank you, will read and adjust my configs accordingly. |
©2024 CERN