Message boards : ATLAS application : extraction failed: could not extract squashfs data, unsquashfs not found
Joined: 17 Feb 17 Posts: 42 Credit: 2,589,736 RAC: 0
Hello! Back after a long while, and I figured I'd start off small with just an old dual Opteron 6128: BOINC 7.16.6, Ubuntu 20.04, VirtualBox 6.1.16, and, to my knowledge, CVMFS working (at least the probe worked just fine). I am a complete newbie at this, so any help is appreciated.

The following task errored out. The machine is set to use 4 CPU cores per task and has plenty of RAM (32 GB).
https://lhcathome.cern.ch/lhcathome/result.php?resultid=302706770

If any more info is needed, please let me know. Any help appreciated.

Edit: Well, I used apt-get to install squashfs-tools and now we're crunching just fine. I don't recall this happening before. Does this not get installed alongside anymore? I'm making a checklist for this project, so this is good to know. For reference:
sudo apt-get install squashfs-tools
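For anyone else who hits this error: a quick way to check whether the tool is actually present, before or after installing, assuming a Debian/Ubuntu host like this one:

# is the binary the ATLAS task complains about on the PATH?
command -v unsquashfs || echo "unsquashfs missing - install squashfs-tools"
# install it if missing
sudo apt-get install squashfs-tools
# confirm package status and version
dpkg -s squashfs-tools | grep -E 'Status|Version'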
Joined: 15 Jun 08 Posts: 2246 Credit: 199,232,774 RAC: 128,164
Any help appreciated.

Good to see that you got it running. Nonetheless, you may want to check your CVMFS setup. It connects via a fallback proxy at Fermilab (131.225.188.245). To find out why, be so kind as to post the output of the following command:
cvmfs_config showconfig atlas.cern.ch | grep -E 'FALLBACK_PROXY|HTTP_PROXY|USE_CDN'
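A related check that can be run directly on the client: "cvmfs_config stat" prints one status line per mounted repository, and the URLs near the end of that line show the stratum-1 server and the proxy currently in use. Something along these lines should work (the -v flag just labels the fields):

cvmfs_config stat -v atlas.cern.ch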
Joined: 2 May 07 Posts: 1835 Credit: 140,189,514 RAC: 131,749
squashfs-tools is needed for ATLAS on Linux. I have it installed on CentOS 8 and CentOS 7 as well. There is a message from David about this in the ATLAS thread.
Joined: 17 Feb 17 Posts: 42 Credit: 2,589,736 RAC: 0
Any help appreciated.

This is the output:

CVMFS_EXTERNAL_FALLBACK_PROXY=
CVMFS_EXTERNAL_HTTP_PROXY=
CVMFS_FALLBACK_PROXY='http://cvmfsbproxy.cern.ch:3126;http://cvmfsbproxy.fnal.gov:3126'  # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HTTP_PROXY='http://squid:3128;DIRECT'  # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USE_CDN=yes  # from /etc/cvmfs/default.local

For reference, I followed this guide:
https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html
followed by
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4971

However, while this method has always been tried and true in the past, I got immediate failures when running 'cvmfs_config probe'. So then it was on over to this thread:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594
I copied the default.local outlined in that thread and now things seem to be going fairly okay, I think.

For what it's worth, I feel like at this point I'm one of those monkeys trying to write Shakespeare, pounding keys and hoping for a miracle. I quite literally have no idea what is making things work behind the scenes or what my part in it is (CVMFS and native Theory specifically). Thank you to everyone who has taken the time to pile all these resources together. Now I just need to figure out how to optimize my setup, perhaps.
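For anyone landing here later: the file in question is /etc/cvmfs/default.local. Based on the output above, a stripped-down version of it looks roughly like this (the repository list and the cache size are illustrative, not copied from the thread):

# /etc/cvmfs/default.local
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch   # illustrative list
CVMFS_QUOTA_LIMIT=4096                        # local cache size in MB, illustrative
CVMFS_USE_CDN=yes                             # use the openhtc.io CDN entry points
CVMFS_HTTP_PROXY="http://squid:3128;DIRECT"   # "squid" is a placeholder hostname
# CVMFS_HTTP_PROXY="auto;DIRECT"              # alternative when no local squid exists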
Joined: 15 Jun 08 Posts: 2246 Credit: 199,232,774 RAC: 128,164
The fallback proxies are configured by a script that is located on CERN's online repository cvmfs-config.cern.ch. Your client falls back to them because your default.local defines a squid that can't be accessed or does not even exist. It looks like you simply copied the default.local file from the forum thread and did not read the comments inside.

There are 2 possible solutions:
1. Set up a local proxy and replace "squid" with its hostname.
2. Remove the "#" in front of CVMFS_HTTP_PROXY="auto;DIRECT".

(1.) would be the preferred method for clusters/single computers providing more than 5 worker nodes; (2.) is the simple solution. The limit of 5 should be seen as a magnitude rather than a sharp limit.

Don't forget a "[sudo] cvmfs_config reload" after you have saved the changes.

<edit>
Sorry, I checked the wrong logfile. Your recent ones show that you are already running a proxy called "squid". That's fine. Leave it this way.

[2021-02-25 18:47:47] 2.8.0.0 2129 1201 59336 79744 0 78 1907592 4096000 1399 130560 0 386235 98.3608 875279 454 http://s1fnal-cvmfs.openhtc.io/cvmfs/atlas.cern.ch http://squid:3128 0
</edit>
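For reference, if solution (2.) were needed, the whole change is one line in /etc/cvmfs/default.local plus a reload, roughly:

# in /etc/cvmfs/default.local: uncomment / set
CVMFS_HTTP_PROXY="auto;DIRECT"

# then reload the client configuration and re-check the effective setting
sudo cvmfs_config reload
cvmfs_config showconfig atlas.cern.ch | grep HTTP_PROXY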
Joined: 2 May 07 Posts: 1835 Credit: 140,189,514 RAC: 131,749
This is the message from David: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5151&postid=40049#40049
Joined: 17 Feb 17 Posts: 42 Credit: 2,589,736 RAC: 0
The fallback proxies are configured by a script that is located on CERN's online repository cvmfs-config.cern.ch.

Both of these machines are running at different locations, so they are single hosts, and I do not believe they require a local proxy.

The Opterons should really be retired soon. They are currently my space heater, but 2.0 GHz at over 130 W each is a little intense on the electric bill when there are Ryzen 3s that can outdo it these days.