1) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45042)
Posted 30 May 2021 by vigilian
Post:
Running /usr/bin/cvmfs_config -s atlas.cern.ch:
CVMFS_REPOSITORY_NAME=atlas.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/atlas.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/atlas.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/atlas.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/atlas.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/atlas.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/atlas.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s atlas-condb.cern.ch:
CVMFS_REPOSITORY_NAME=atlas-condb.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/atlas-condb.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/atlas-condb.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/atlas-condb.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/atlas-condb.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/atlas-condb.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/atlas-condb.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s lhcb.cern.ch:
CVMFS_REPOSITORY_NAME=lhcb.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/lhcb.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/lhcb.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/lhcb.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/lhcb.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/lhcb.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/lhcb.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s alice.cern.ch:
CVMFS_REPOSITORY_NAME=alice.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/alice.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/alice.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/alice.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/alice.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/alice.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/alice.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s alice-ocdb.cern.ch:
CVMFS_REPOSITORY_NAME=alice-ocdb.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/alice-ocdb.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/alice-ocdb.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/alice-ocdb.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/alice-ocdb.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/alice-ocdb.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/alice-ocdb.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s grid.cern.ch:
CVMFS_REPOSITORY_NAME=grid.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/grid.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/grid.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/grid.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/grid.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/grid.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/grid.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s cms.cern.ch:
CVMFS_REPOSITORY_NAME=cms.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/cms.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/cms.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/cms.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/cms.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/cms.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/cms.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s sft.cern.ch:
CVMFS_REPOSITORY_NAME=sft.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/sft.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/sft.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/sft.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/sft.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/sft.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/sft.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s geant4.cern.ch:
CVMFS_REPOSITORY_NAME=geant4.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/geant4.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/geant4.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/geant4.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/geant4.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/geant4.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/geant4.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s na61.cern.ch:
CVMFS_REPOSITORY_NAME=na61.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/na61.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/na61.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/na61.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/na61.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/na61.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/na61.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf

Running /usr/bin/cvmfs_config -s boss.cern.ch:
CVMFS_REPOSITORY_NAME=boss.cern.ch
CVMFS_BACKOFF_INIT=2    # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10    # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1    # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/var/lib/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes    # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes    # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE=single    # from /etc/cvmfs/default.local
CVMFS_CONFIG_REPO_DEFAULT_ENV=1    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch    # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY=    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800    # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY=DIRECT    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2    # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024    # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3    # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072    # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat'    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300    # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000    # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs    # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,alice.cern.ch,alice-ocdb.cern.ch,grid.cern.ch,cms.cern.ch,sft.cern.ch,geant4.cern.ch,na61.cern.ch,boss.cern.ch    # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes    # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch:8000/cvmfs/boss.cern.ch;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/boss.cern.ch;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/boss.cern.ch;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/boss.cern.ch;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/boss.cern.ch;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/boss.cern.ch'    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SHARED_CACHE=yes    # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5    # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10    # from /etc/cvmfs/default.conf
CVMFS_USE_GEOAPI=yes    # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs    # from /etc/cvmfs/default.conf
[/quote]
2) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45035)
Posted 28 May 2021 by vigilian
Post:
Run the following command as root and post the output here:
cvmfs_config showconfig -s


Will do it tomorrow. In the meantime I'm letting a few tasks complete, just to be sure that everything is going okay.
3) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45033)
Posted 28 May 2021 by vigilian
Post:
So indeed, with Singularity installed locally, built from source for this specific distribution, it is working correctly and crunching as we speak.


But I would still like an explanation of what misconfiguration I could have made in CVMFS to end up with those missing folders in /var/lib, which as far as I understand doesn't concern CVMFS. In the same vein, could you confirm whether or not CVMFS is there as a filesystem layer applying a filesystem image downloaded by the project script?

I would also like to know how to tell the difference between the repos I need and the ones I don't, because again, your command didn't check out with the bundled tools. As a reminder, it was (from one of your previous posts, in the links you posted):
CVMFS_REPOSITORIES="atlas,atlas-condb,grid,cernvm-prod,sft,alice
4) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) Version 2 (Message 45032)
Posted 28 May 2021 by vigilian
Post:
No, okay, I didn't notice that you were referencing a different step-by-step than the original post of this thread. With yours I've succeeded in installing 3.7.2 :)
Thanks.
5) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45031)
Posted 28 May 2021 by vigilian
Post:
Then, and I'm sure this is a stupid question given your previous comments: if the Singularity downloaded from the project itself is useless (because it causes a lot of errors on other distributions), why is there no test to see which distribution is running and, based on that, allow or prevent the download of Singularity?

We don't care about the warnings then, duly noted. It doesn't explain why, if those files are really needed, none of the curl commands succeeded. But okay, let's put that aside for the moment.

For the repos, nowhere was the information as clear as it should have been. So, using common sense, I took the most complete list I could find on the forum, from a post that wasn't that old; at least more recent than what you gave yesterday.

As I also said, your configuration line was not working: chksetup was complaining about it. Proof that your thread about it is not totally correct, or else the tool is not coded properly; one of the two.

I don't understand this sentence:
You don't need those repos but configured them.


As for the use of openhtc, it's simple: it was apparently unreachable at the time. And again, what do we do first? We use the direct link, not a mirror, to access something. Especially on a first try, and I don't see how that can be a problem in the short term.

About your explanation of the misconfiguration: first, I don't see what I would have missed.
Secondly, the folders, which several people have in their error logs I should point out, do not reference CVMFS in any way, since they are in /var/lib and not in /etc/cvmfs.

From what I understood, the project makes us download an image of the filesystem it wants to apply, which gets uncompressed and then mounted. If the application then complains about some folder missing inside CVMFS itself (not the same thing as what I've just described), I don't see how I could interfere with the process to resolve it. To my knowledge, and from what I've read, nobody hitting those errors has found a solution, or they stop responding. At some point in my reading I even spotted you, or one of your colleagues, acknowledging that there was an error in the delivered package, so...

So I guess that even though CVMFS is apparently delivered in all flavors on the project page, you'd really prefer that people run a CentOS VM (which is now deprecated)?
6) Message boards : ATLAS application : Creation of container failed (Message 45029)
Posted 28 May 2021 by vigilian
Post:
When you read the messages in the folders, you can find the answer!
Singularity is downloaded from the Atlas-Server, when local no singularity could be found!


Yes, and that's exactly where the message is coming from: from the Singularity downloaded from the server. Apparently there is something wrong with the package, since it can't find some folders from the downloaded image, @maeax.
Do you want the task unit to prove it to you?
here it is:
https://lhcathome.cern.ch/lhcathome/result.php?resultid=317300586
And as you can see, Singularity is downloaded and probed as working fine, but there are still pieces missing, which echoes the messages posted earlier in this thread, for example this one:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5647&postid=44897
So since I'm not the only one, let's not pretend the package is okay... OR some information is missing, then.
7) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) Version 2 (Message 45028)
Posted 28 May 2021 by vigilian
Post:
Singularity 3.7.3 is current.
I updated the procedure mentioned above.



Which Go version did you use? @computezrmle mentioned in another thread that he tried 1.15 and it didn't work, but 1.14 did. I just tried to build the dependencies with Go 1.14.5, which is an archived version, and I can get all the deps. So did you use the latest Go version and succeed in building Singularity, or did you use an older one?



github.com/golang/dep/gps
# github.com/golang/dep/gps
/root/go/pkg/mod/github.com/golang/dep@v0.5.4/gps/constraint.go:103:21: cannot use sv (type *semver.Version) as type semver.Version in field value
/root/go/pkg/mod/github.com/golang/dep@v0.5.4/gps/constraint.go:122:16: invalid type assertion: c.(semver.Version) (non-interface type *semver.Constraints on left)
/root/go/pkg/mod/github.com/golang/dep@v0.5.4/gps/constraint.go:149:4: undefined: semver.Constraint
8) Message boards : ATLAS application : Guide for building everything from sources to run native ATLAS on Debian 9 (Stretch) Version 2 (Message 45027)
Posted 28 May 2021 by vigilian
Post:
Singularity 3.7.3 is current.
I updated the procedure mentioned above.



Which Go version did you use? @computezrmle mentioned in another thread that he tried 1.15 and it didn't work, but 1.14 did. I just tried to build the dependencies with Go 1.14.5, which is an archived version, and I can get all the deps. So did you use the latest Go version and succeed in building Singularity, or did you use an older one?
9) Message boards : ATLAS application : Creation of container failed (Message 45025)
Posted 28 May 2021 by vigilian
Post:
Looking into the errors, I found that the problem is the container using a wrong path:
09:56:16 (130425): wrapper (7.7.26015): starting
09:56:16 (130425): wrapper: running run_atlas (--nthreads 6)
[2021-05-06 09:56:16] Arguments: --nthreads 6
[2021-05-06 09:56:16] Threads: 6
[2021-05-06 09:56:16] Checking for CVMFS
[2021-05-06 09:56:16] Probing /cvmfs/atlas.cern.ch... OK
[2021-05-06 09:56:16] Probing /cvmfs/atlas-condb.cern.ch... OK
[2021-05-06 09:56:16] Running cvmfs_config stat atlas.cern.ch
[2021-05-06 09:56:16] VERSION PID UPTIME(M) MEM(K) REVISION EXPIRES(M) NOCATALOGS CACHEUSE(K) CACHEMAX(K) NOFDUSE NOFDMAX NOIOERR NOOPEN HITRATE(%) RX(K) SPEED(K/S) HOST PROXY ONLINE
[2021-05-06 09:56:16] 2.8.1.0 129901 5 25280 84062 2 3 4718377 6144001 0 130560 0 36 99.909 528 724 http://s1ral-cvmfs.openhtc.io/cvmfs/atlas.cern.ch http://192.168.2.1:3128 1
[2021-05-06 09:56:16] CVMFS is ok
[2021-05-06 09:56:16] Using singularity image /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos7.img
[2021-05-06 09:56:16] Checking for singularity binary...
[2021-05-06 09:56:16] Singularity is not installed, using version from CVMFS
[2021-05-06 09:56:16] Checking singularity works with /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/current/bin/singularity exec -B /cvmfs /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos7.img hostname
[2021-05-06 09:56:18] INFO: Converting SIF file to temporary sandbox... fonck INFO: Cleaning up image...
[2021-05-06 09:56:18] Singularity works
[2021-05-06 09:56:18] Set ATHENA_PROC_NUMBER=6
[2021-05-06 09:56:18] Starting ATLAS job with PandaID=5047215526
[2021-05-06 09:56:18] Running command: /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/current/bin/singularity exec --pwd /var/lib/boinc-client/slots/7 -B /cvmfs,/var /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos7.img sh start_atlas.sh
[2021-05-06 09:56:20] Job failed
[2021-05-06 09:56:20] INFO:    Converting SIF file to temporary sandbox...
[2021-05-06 09:56:20] INFO:    Cleaning up image...
[2021-05-06 09:56:20] FATAL:   container creation failed: mount /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.2/var/singularity/mnt/session/rootfs/var/lib/package-list->/cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.2/var/singularity/mnt/session/underlay/var/lib/package-list error: while mounting /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.2/var/singularity/mnt/session/rootfs/var/lib/package-list: destination /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.2/var/singularity/mnt/session/underlay/var/lib/package-list doesn't exist in container
[2021-05-06 09:56:20] ./runtime_log.err
[2021-05-06 09:56:20] ./runtime_log

For testing, I temporarily granted write permission on /var/lib:
sudo chmod o+w /var/lib/

As you can see, all the files are created there...
$ ls -lh /var/lib/|grep boinc
drwxr-xr-x  2 boinc         boinc         4,0K mai    5 23:49 alternatives
lrwxrwxrwx  1 boinc         boinc           12 mai   22  2018 boinc -> boinc-client
drwxr-xr-x  8 boinc         boinc         4,0K mai    6 09:57 boinc-client
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 00:12 condor
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 00:28 cs
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:47 games
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 gssproxy
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 initramfs
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 machines
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 ntp
-rw-r--r--  1 boinc         boinc            0 mai    6 09:56 package-list
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 rpcbind
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 rpm
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 rpm-state
drwxr-xr-x  2 boinc         boinc         4,0K mai    6 09:56 texmf

Some parts are missing at the mount point:
$ ls /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.2/var/singularity/mnt/session/
$


Same problem; I've been posting about it all night long here:

https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5633&postid=45024#45024

So is a local installation of Singularity the only available solution?
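For what it's worth, building Singularity locally is quick once the prerequisites are in place. A sketch, assuming a working Go toolchain, the usual build deps (libseccomp-dev, uuid-dev, pkg-config), and that the 3.7.x tarball is still on the hpcng GitHub releases page:

```shell
# Build Singularity 3.7.3 from source (sketch; version and URL are
# assumptions -- check the project's releases page first).
export VERSION=3.7.3
wget "https://github.com/hpcng/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz"
tar -xzf "singularity-${VERSION}.tar.gz"
cd singularity
./mconfig                      # configure the build tree
make -C builddir               # compile
sudo make -C builddir install  # install to /usr/local by default
singularity --version          # confirm the binary is found
```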
10) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45024)
Posted 28 May 2021 by vigilian
Post:
Okay, so to recap:
those of you who, like me, created your VM through a script or on a VM platform like Cockpit, or even Azure or others, will end up short on folders.
So you need to create:



    alternatives
    condor
    cs
    games
    gssproxy
    initramfs
    machines
    ntp
    package-list
    rpcbind
    rpm-state
    texmf
    tpm

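The list above can be scripted. A minimal sketch, under the assumption that these are exactly the names the container underlay expects, all directories except package-list, which is a plain file:

```shell
# create_missing_dirs BASE: create the /var/lib entries listed above.
# All are directories except package-list, which is an empty file.
create_missing_dirs() {
    base="$1"
    for d in alternatives condor cs games gssproxy initramfs machines ntp \
             rpcbind rpm-state texmf tpm; do
        mkdir -p "$base/$d"
    done
    touch "$base/package-list"
}

# usage (as root): create_missing_dirs /var/lib
```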


The only problem now is that I get this:

[2021-05-28 08:11:34] FATAL:   container creation failed: mount /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.3/var/singularity/mnt/session/rootfs/var/lib/package-list->/cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.3/var/singularity/mnt/session/underlay/var/lib/package-list error: while mounting /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.3/var/singularity/mnt/session/rootfs/var/lib/package-list: destination /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.7.3/var/singularity/mnt/session/underlay/var/lib/package-list doesn't exist in container


But that's in the CERN filesystem itself, which nobody should touch since it is (or should be) handled automatically... So what's the solution to that, if there even is one?
11) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45023)
Posted 28 May 2021 by vigilian
Post:
Also: according to the failing tasks, CVMFS is configured okay,
but apparently it can't extract files:
root filesystem extraction failed: could not extract squashfs data, unsquashfs not found

This is on a real Ubuntu 20.04.2 VM.
Any ideas about that?
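For reference, the missing binary is easy to supply: on Debian/Ubuntu, unsquashfs ships in the squashfs-tools package (package name on other distributions is an assumption, so check your package manager):

```shell
# unsquashfs, which Singularity needs to unpack SIF images,
# comes from the squashfs-tools package on Debian/Ubuntu
sudo apt-get update
sudo apt-get install -y squashfs-tools
```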

and your:
CVMFS_REPOSITORIES="atlas,atlas-condb,grid,cernvm-prod,sft,alice"

from your post: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594
is not accurate. Apparently it really needs the fully-qualified domains, not the short names; at least chksetup complains about it.
Still on a real Ubuntu VM, and also in WSL.
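In other words, a /etc/cvmfs/default.local that chksetup accepts needs fully-qualified repository names. A minimal sketch (the repository selection here is an example, not an official list):

```shell
# /etc/cvmfs/default.local -- fully-qualified repository names,
# e.g. atlas.cern.ch rather than the short name "atlas"
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch,sft.cern.ch,alice.cern.ch
CVMFS_CLIENT_PROFILE=single
CVMFS_QUOTA_LIMIT=4000
```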

I will post an update thread on how to do things once I've finished debugging this.


Okay, so I resolved the squashfs problem with https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5611, which I didn't find in my earlier search; I don't know why it didn't get installed as a requirement, but okay.
The tasks are loading now.
I had a problem with /var/lib/alternatives not being created: apparently the "container" couldn't create it because of permissions, so I created it for it.

Now a task is loading but apparently not crunching, and I don't know why for the moment, so I'm waiting for it to fail so that I can see what is wrong with it.



Now it's the turn of /var/lib/condor; same problem, so I did the same thing, and we'll see with the next task. Are there many directories like that? It doesn't seem normal that the boinc user, created automatically by the apt script, can't create what it needs... And it seems to be happening again. I'm beginning to know in advance whether atlas.sh is going down or not... This is getting old by the minute.
12) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45022)
Posted 28 May 2021 by vigilian
Post:
Also: according to the failing tasks, CVMFS is configured okay,
but apparently it can't extract files:
root filesystem extraction failed: could not extract squashfs data, unsquashfs not found

This is on a real Ubuntu 20.04.2 VM.
Any ideas about that?

and your:
CVMFS_REPOSITORIES="atlas,atlas-condb,grid,cernvm-prod,sft,alice"

from your post: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594
is not accurate. Apparently it really needs the fully-qualified domains, not the short names; at least chksetup complains about it.
Still on a real Ubuntu VM, and also in WSL.

I will post an update thread on how to do things once I've finished debugging this.


Okay, so I resolved the squashfs problem with https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5611, which I didn't find in my earlier search; I don't know why it didn't get installed as a requirement, but okay.
The tasks are loading now.
I had a problem with /var/lib/alternatives not being created: apparently the "container" couldn't create it because of permissions, so I created it for it.

Now a task is loading but apparently not crunching, and I don't know why for the moment, so I'm waiting for it to fail so that I can see what is wrong with it.
13) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45021)
Posted 27 May 2021 by vigilian
Post:
Also: according to the failing tasks, CVMFS is configured okay,
but apparently it can't extract files:
root filesystem extraction failed: could not extract squashfs data, unsquashfs not found

This is on a real Ubuntu 20.04.2 VM.
Any ideas about that?

and your:
CVMFS_REPOSITORIES="atlas,atlas-condb,grid,cernvm-prod,sft,alice"

from your post: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5594
is not accurate. Apparently it really needs the fully-qualified domains, not the short names; at least chksetup complains about it.
Still on a real Ubuntu VM, and also in WSL.

I will post an update thread on how to do things once I've finished debugging this.
14) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45020)
Posted 27 May 2021 by vigilian
Post:
Those threads are partly outdated.

The repositories in particular are not working, and the WSL2 part of the doc is not working either.

For the repo:


root@ubucern:~# cvmfs_config chksetup
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/atlas.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/atlas.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/atlas.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/atlas.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/atlas.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/atlas.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/atlas.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/atlas-condb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/lhcb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/lhcb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/lhcb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/lhcb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/lhcb.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/lhcb.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/lhcb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/alice.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/alice.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/alice.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/alice.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/alice.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/alice.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/alice.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/alice-ocdb.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/grid.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/grid.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/grid.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/grid.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/grid.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/grid.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/grid.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/cms.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/cms.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/cms.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/cms.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/cms.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/cms.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/cms.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/sft.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/sft.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/sft.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/sft.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/sft.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/sft.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/sft.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/geant4.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/geant4.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/geant4.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/geant4.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/geant4.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/geant4.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/geant4.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/na61.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/na61.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/na61.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/na61.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/na61.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/na61.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/na61.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.cern.ch:8000/cvmfs/boss.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/boss.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/boss.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/boss.cern.ch/.cvmfspublished
Warning: failed to resolve auto proxy for http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/boss.cern.ch/.cvmfspublished
Warning: failed to access http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/boss.cern.ch/.cvmfspublished through proxy DIRECT
Warning: failed to resolve auto proxy for http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/boss.cern.ch/.cvmfspublished


And for the WSL part, probing fails, plus the same repo problem:

$ cvmfs_config probe
Probing /cvmfs/atlas.cern.ch... Failed!
Probing /cvmfs/atlas-condb.cern.ch... Failed!
Probing /cvmfs/lhcb.cern.ch... Failed!
Probing /cvmfs/alice.cern.ch... Failed!
Probing /cvmfs/alice-ocdb.cern.ch... Failed!
Probing /cvmfs/grid.cern.ch... Failed!
Probing /cvmfs/cms.cern.ch... Failed!
Probing /cvmfs/sft.cern.ch... Failed!
Probing /cvmfs/geant4.cern.ch... Failed!
Probing /cvmfs/na61.cern.ch... Failed!
Probing /cvmfs/boss.cern.ch... Failed!


And indeed, I tried to reach those repos and got a 404 error. It's not a network problem; I've tried from different locations.
15) Message boards : Theory Application : Probing /cvmfs/sft.cern.ch... Failed! (Message 45007)
Posted 25 May 2021 by vigilian
Post:
Your task lists don't show a Theory task except an old one from 2019.

Part of your problem might be that you ask for ATLAS native.
Each ATLAS task downloads an EVNT file of up to 420 MB but all of them fail after a few minutes since you didn't install a local CVMFS client:
[2021-04-26 20:32:27] Checking for CVMFS
[2021-04-26 20:32:27] No cvmfs_config command found, will try listing directly
[2021-04-26 20:32:27] ls: cannot access '/cvmfs/atlas.cern.ch/repo/sw': No such file or directory
[2021-04-26 20:32:27] Failed to list /cvmfs/atlas.cern.ch/repo/sw


You also ask for beta apps (ATLAS native long).
Why? This makes no sense as long as you do not even get the normal ones running.



Could you maybe point us to a pinned message on how to install a CVMFS client? I think that because it's named "native", people assume, as I would have, that it's just download and run...
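For anyone landing here: the client install on Debian/Ubuntu is only a few commands. A sketch, assuming the cvmfs-release package URL is still current (check the CernVM-FS documentation if it has moved):

```shell
# Install the CVMFS client on Debian/Ubuntu (sketch; URL may change)
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install -y cvmfs
sudo cvmfs_config setup   # sets up autofs and the fuse config
cvmfs_config probe        # verify the repositories mount
```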
16) Questions and Answers : Windows : All vBox WU in error (Message 43981)
Posted 22 Dec 2020 by vigilian
Post:
Well, yeah, sure, I could use 6.1.11, but I'm not in the habit of downgrading, especially when each release fixes so many CVEs and bugs.

22/12/2020 12:02:23 | LHC@home | Scheduler request completed: got 0 new tasks
22/12/2020 12:02:23 | LHC@home | No tasks sent
22/12/2020 12:02:23 | LHC@home | No tasks are available for SixTrack
22/12/2020 12:02:23 | LHC@home | No tasks are available for sixtracktest
22/12/2020 12:02:23 | LHC@home | No tasks are available for Theory Simulation
22/12/2020 12:02:23 | LHC@home | No tasks are available for ATLAS Simulation
22/12/2020 12:02:23 | LHC@home | Project requested delay of 6 seconds


These seem important to me, or at least I use all of these features:
Serial: Fixed blocking a re-connect when TCP mode is used (bug #19878) 
HPET: Fixed inability of guests to use the last timer 
Linux host and guest: Support kernel version 5.9 (bug #19845) 
Linux guest: Fixed Guest additions build for RHEL 8.3 beta (bug #19863) 
Linux guest: Fixed VBoxService crashing in the CPU hot-plug service under certain circumstances during a CPU hot-unplug event (bugs #19902 and #19903) 
GUI: Fixes file name changes in the File location field when creating Virtual Hard Disk (bug #19286) 
Linux host and guest: Linux kernel version 5.8 support 

Guest Additions: Improved resize coverage for VMSVGA graphics controller
Guest Additions: Fixed issues detecting guest additions ISO at runtime 
VBoxManage: Fixed command option parsing for the "snapshot edit" sub-command
VBoxManage: Fixed crash of 'VBoxManage internalcommands repairhd' when processing invalid input (bug #19579)
Guest Additions, 3D: New experimental GLX graphics output
Guest Additions, 3D: Fixed releasing texture objects, which could cause guest crashes 


If I have the time sure I will do it. But again, I'm working...
17) Questions and Answers : Windows : All vBox WU in error (Message 43979)
Posted 22 Dec 2020 by vigilian
Post:
I didn't get the time to put BOINC on exclusive 100%, but 100% CPU time has been set for 2 days now and it didn't change a thing.

What I did have time for this morning was giving 25 minutes of my time to watch another task fail without my doing anything. 25 minutes is the exact amount of time the project needs to fail, EACH TIME. Not from time to time, not sometimes: EACH TIME.

So let's review it together, shall we:

22/12/2020 09:56:06 | GPUGRID | No tasks sent
22/12/2020 09:56:06 | GPUGRID | This computer has reached a limit on tasks in progress
22/12/2020 09:56:06 | GPUGRID | Project requested delay of 31 seconds
22/12/2020 10:04:42 |  | Suspending GPU computation - computer is in use
22/12/2020 10:10:19 | LHC@home | Computation for task CMS_472945_1608624847.991143_0 finished
22/12/2020 10:11:44 | LHC@home | Sending scheduler request: To report completed tasks.
22/12/2020 10:11:44 | LHC@home | Reporting 1 completed tasks
22/12/2020 10:11:44 | LHC@home | Requesting new tasks for CPU
22/12/2020 10:11:46 | LHC@home | Scheduler request completed: got 1 new tasks
22/12/2020 10:11:46 | LHC@home | Project requested delay of 6 seconds
22/12/2020 10:11:48 | LHC@home | Starting task CMS_484666_1608626650.023215_0
22/12/2020 10:13:02 | World Community Grid | Computation for task MIP1_00327339_0391_0 finished
22/12/2020 10:13:04 | World Community Grid | Started upload of MIP1_00327339_0391_0_r652393296_0
22/12/2020 10:13:08 | World Community Grid | Finished upload of MIP1_00327339_0391_0_r652393296_0
22/12/2020 10:14:04 | World Community Grid | project suspended by user
22/12/2020 10:14:06 | GPUGRID | project suspended by user
22/12/2020 10:14:07 | LHC@home | Sending scheduler request: To fetch work.
22/12/2020 10:14:07 | LHC@home | Requesting new tasks for CPU
22/12/2020 10:14:09 | LHC@home | Scheduler request completed: got 0 new tasks
22/12/2020 10:14:09 | LHC@home | No tasks sent
22/12/2020 10:14:09 | LHC@home | This computer has reached a limit on tasks in progress
22/12/2020 10:14:09 | LHC@home | Project requested delay of 6 seconds
22/12/2020 10:28:21 | LHC@home | Sending scheduler request: To fetch work.
22/12/2020 10:28:21 | LHC@home | Requesting new tasks for CPU
22/12/2020 10:28:22 | LHC@home | Scheduler request completed: got 0 new tasks
22/12/2020 10:28:22 | LHC@home | No tasks sent
22/12/2020 10:28:22 | LHC@home | This computer has reached a limit on tasks in progress
22/12/2020 10:28:22 | LHC@home | Project requested delay of 6 seconds
22/12/2020 10:29:04 |  | Resuming GPU computation
22/12/2020 10:29:12 |  | Suspending GPU computation - computer is in use
22/12/2020 10:32:19 |  | Resuming GPU computation
22/12/2020 10:32:24 |  | Suspending GPU computation - computer is in use
22/12/2020 10:37:30 | LHC@home | Computation for task CMS_484666_1608626650.023215_0 finished
22/12/2020 10:38:39 | LHC@home | Sending scheduler request: To report completed tasks.
22/12/2020 10:38:39 | LHC@home | Reporting 1 completed tasks
22/12/2020 10:38:39 | LHC@home | Requesting new tasks for CPU
22/12/2020 10:38:41 | LHC@home | Scheduler request completed: got 1 new tasks
22/12/2020 10:38:41 | LHC@home | Project requested delay of 6 seconds
22/12/2020 10:38:43 | LHC@home | Starting task CMS_482381_1608626349.863761_0
22/12/2020 10:38:51 | LHC@home | Sending scheduler request: To fetch work.
22/12/2020 10:38:51 | LHC@home | Requesting new tasks for CPU
22/12/2020 10:38:52 | LHC@home | Scheduler request completed: got 0 new tasks
22/12/2020 10:38:52 | LHC@home | No tasks sent
22/12/2020 10:38:52 | LHC@home | This computer has reached a limit on tasks in progress
22/12/2020 10:38:52 | LHC@home | Project requested delay of 6 seconds


As you can see, I've been patient enough not to disturb the process.
There is no computation suspension whatsoever; I even suspended the other projects.
This, I guess, is the task in question:
https://lhcathome.cern.ch/lhcathome/result.php?resultid=292553098


And I'm sorry, guys, but when a job is badly done I just get mad. I've been working in this field since I was 18 years old, and that's a long time for someone of my age.


And THIS is ABNORMAL:
292553098 	150175186 	10670359 	22 Dec 2020, 9:11:45 UTC 	22 Dec 2020, 9:38:40 UTC 	Error while computing 	1,528.14 	20.84 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292551833 	150174199 	10670359 	22 Dec 2020, 8:42:43 UTC 	22 Dec 2020, 9:11:45 UTC 	Error while computing 	1,528.12 	22.95 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292550046 	150173024 	10670359 	22 Dec 2020, 8:11:59 UTC 	22 Dec 2020, 8:42:42 UTC 	Error while computing 	1,528.18 	22.27 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292550038 	150173016 	10670359 	22 Dec 2020, 7:45:07 UTC 	22 Dec 2020, 8:11:59 UTC 	Error while computing 	1,508.77 	20.88 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292549266 	150172620 	10670359 	22 Dec 2020, 7:11:39 UTC 	22 Dec 2020, 7:45:07 UTC 	Error while computing 	1,528.54 	22.45 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292548664 	150172248 	10670359 	22 Dec 2020, 6:44:30 UTC 	22 Dec 2020, 7:11:39 UTC 	Error while computing 	1,528.53 	21.52 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292541108 	150167394 	10670359 	22 Dec 2020, 6:17:34 UTC 	22 Dec 2020, 6:44:30 UTC 	Error while computing 	1,529.67 	24.64 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64
292546876 	150171154 	10670359 	22 Dec 2020, 5:49:51 UTC 	22 Dec 2020, 6:17:34 UTC 	Error while computing 	1,527.65 	23.44 	--- 	CMS Simulation v50.00 (vbox64) windows_x86_64


And I have pages and pages of that.
So I don't want to hear anything more of: "yeah, but you know, you've put an execution cap", "but you don't have enough memory", "but blah blah blah".

That many occurrences, with that precise a timing for each task: no, that is clearly not random chance. And since no one wants to answer me about whether the reported CPU time is an estimate rather than accurate, I will take that as a "no, it's not an estimate, it's accurate CPU time".

There is something wrong with those CMS VMs. And I clearly don't care whether it works for the few hundreds, maybe thousands, of regular participants,
because let's be honest, we are a very select club here: people in IT and development who actually know something about the SETI program and who came all the way to BOINC and then to the CERN projects... And I even know Belgian physicists (more than one, of various ages) who actually DON'T KNOW about the CERN projects on BOINC even though they use and participate in LHC simulations.
And working only for a small minority is not enough. If it doesn't work everywhere, then there is a problem in the programming, that's it. And it has always been like that: whether it's a lack of conditions in the code to prevent something from happening, or a lack of error handling.


And yes, crystal pellet, I'm a bit harsh, even though I recognized earlier that you were the only one not stuck on a loop (which was a compliment, by the way). But it's like telling a customer that he's crazy, that what he tells you is happening doesn't actually happen. That's the same kind of idiotic behavior or nonsense. You start by believing the customer about what's happening, you ask yourself whether the product can actually handle that situation, and you start by revising your code, not the other way around.

So let's just assume, for once, that I'm right.

The few possibilities here are:
- the CERN VM can't handle some of the VBoxManage parameters it is supposed to support according to the BOINC documentation.
- a third-party program is interfering with the mounting of that specific VM's shared folders, for whatever reason.
- there is a bug in VirtualBox 6.1.16.
- there is a bug in the OS version hosting the CERN VM, or something else in Windows 10 interacting with the CERN guest OS.

Either way, the logging in those VMs is insufficient and incomplete. And it can't be corruption of the VM, because if it were, the VM wouldn't start, and I have definitely reset the project several times already. Nor is it a VirtualBox installation problem, because then I would be overwhelmed by problems on my numerous other VMs, or by other errors. And anyway, I have already reinstalled VirtualBox more than once.
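For what it's worth, one way to probe the shared-folder hypothesis from the host side is VBoxManage's machine-readable VM info, which lists the shared folders a VM is configured with. Below is a minimal parsing sketch in Python; the sample lines and the Windows slot path in it are invented for illustration, and this is not part of the CERN VM or the vboxwrapper:

```python
def shared_folders(machinereadable_output):
    """Extract (name, host path) pairs of shared folders from the output of
    `VBoxManage showvminfo <vm> --machinereadable`."""
    names, paths = {}, {}
    for line in machinereadable_output.splitlines():
        key, _, value = line.partition("=")
        value = value.strip('"')
        if key.startswith("SharedFolderNameMachineMapping"):
            names[key.removeprefix("SharedFolderNameMachineMapping")] = value
        elif key.startswith("SharedFolderPathMachineMapping"):
            paths[key.removeprefix("SharedFolderPathMachineMapping")] = value
    # Pair name N with path N, in slot order.
    return [(names[i], paths[i]) for i in sorted(names)]

# Hypothetical sample in the format VBoxManage prints (slot path invented):
sample = (
    'SharedFolderNameMachineMapping1="shared"\n'
    'SharedFolderPathMachineMapping1="C:\\\\BOINC\\\\slots\\\\0\\\\shared"'
)
print(shared_folders(sample))
```

Running this against the real output for the slot's VM would at least show whether the "shared" folder mapping exists on the host side before the guest ever tries to mount it.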

Every application is ticked in that school profile in the LHC preferences, so I should receive every application, but I only receive CMS and SixTrack (from time to time). So I will untick CMS to see whether I receive Theory tasks, but I highly doubt it; I would have received some by now.

And I literally don't have time to play with BOINC parameters all day. The devs and moderators here should already be grateful that someone is willing to put this much energy into actually testing the project, writing back here, doing extensive testing, and trying to explain to them how they are wrong and going in circles.


A few remarks, though.
While I was waiting for this task to fail,
I was doing some maintenance on an Ubuntu 20.10 VM, and I mention this for a very precise reason. I restarted this VM several times, and as you can see there hasn't been a single computation suspension, which proves that starting a VM doesn't need 65% of an AMD 3700X CPU. Which in itself means that gunde was wrong in saying "yeah, it's certainly the CPU cap, blah blah blah". No, it is not. And the computation suspensions at the start of CMS tasks (some of them, not all) are not normal either, especially since those VMs aren't as power-hungry as an Ubuntu 20.10 with the GUI enabled.

And before crystal pellet says anything like "yeah, but you used several VMs at once": there was only one CMS task running at the time. What you saw yesterday was the result of me switching profiles from home to school; I didn't notice at the time that the school profile was set to 2 tasks at once, and I changed it to 1 during that day. But yes, sorry, I'm working, guys, and my VMs need to be up at all times, which shouldn't be a problem whatsoever. Since when do we doubt that VirtualBox, which works for millions of customers, can handle several VMs at once? It can only mean one thing, again: there is a problem INSIDE the CERN VM, or a problem specific to that VM, not something else.
18) Questions and Answers : Windows : All vBox WU in error (Message 43969)
Posted 21 Dec 2020 by vigilian
Post:
And you don't find it odd either that each time it goes into a computation error at the same % in BOINC, which is the exact same CPU time in the task's stats, and also at the exact same "Mounting the shared directory" log line?
It doesn't strike you as a problem? At any moment?

There is a problem with those VMs and their interaction with Windows. That's the real problem, whether it's an interaction with a third-party program or something else. That's probably why it can't find the file in the shared directory, since I guess that is where the heartbeat file lives (correct?), and that's what results in a computation error.
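As an illustration of that failure mode: the wrapper log's "missing file system status. (errno = '2')" is the classic ENOENT, "no such file or directory". A minimal sketch of such a heartbeat check, with a hypothetical file path; this is not the actual vboxwrapper code, just the shape of the check:

```python
import errno
import os

def check_heartbeat(path):
    """Return ("ok", mtime) if the heartbeat file is reachable,
    or ("error", errno) if stat() fails.

    A heartbeat file that vanishes (e.g. because the shared folder
    was never mounted in the guest) fails with errno 2 (ENOENT),
    matching the "(errno = '2')" line in the wrapper log."""
    try:
        return ("ok", os.stat(path).st_mtime)
    except OSError as e:
        return ("error", e.errno)

# Demo: a path that does not exist reports errno 2, i.e. ENOENT.
status, detail = check_heartbeat("/nonexistent/shared/heartbeat")
print(status, detail, detail == errno.ENOENT)
```

If the guest-side mount of the shared directory fails, a check like this on the host side keeps failing with ENOENT until the wrapper gives up and marks the task as errored, which would fit the pattern being described.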

Just to continue on a common-sense note, don't forget that I'm putting time into writing this too. I could simply consider this project buggy and just delete it. So instead, again, of giving the same nonsense argument that has no grip on reality (like "yeaaaah, an SSD doesn't improve disk access in any way" ^^), maybe, just maybe, look at the stats of the VM and the actual surroundings of the tasks, and not only at the messages the guest OS reports (which are by nature incomplete).

What would actually be a smart way to advise me is to point me to the ticket system for those VMs, so I can file a ticket and we can look into this more deeply, maybe with another way of collecting accurate logs of what's happening here.
19) Questions and Answers : Windows : All vBox WU in error (Message 43968)
Posted 21 Dec 2020 by vigilian
Post:
Well, maybe you have a Xeon CPU with 64 cores, or maybe you only do word processing on your hosts, but I don't.

This is not about my hardware or what it processes. That has no effect on solving the issue here.

But anyway, your arguments have no grip on reality. Why? Because for the last 6 hours there haven't been enough pause/resume events, the CERN VM could have used 100% of the CPU time, and still all the tasks resulted in errors, and more than a few of them had no pauses at all.


2020-12-20 09:40:29 (48180): VM state change detected. (old = 'Running', new = 'Paused')
2020-12-20 09:40:39 (48180): VM state change detected. (old = 'Paused', new = 'Running')
2020-12-20 09:51:31 (48180): VM state change detected. (old = 'Running', new = 'Paused')
2020-12-20 09:51:41 (48180): VM state change detected. (old = 'Paused', new = 'Running')
2020-12-20 09:53:14 (48180): Guest Log: [INFO] Mounting the shared directory


2020-12-20 09:54:02 (48180): VM Heartbeat file specified, but missing.
2020-12-20 09:54:02 (48180): VM Heartbeat file specified, but missing file system status. (errno = '2') 


And the VM still had the CPU limit applied on the last task.

2020-12-20 09:33:39 (48180): Preference change detected
2020-12-20 09:33:39 (48180): Setting CPU throttle for VM. (65%)
2020-12-20 09:33:39 (48180): Setting checkpoint interval to 3600 seconds. (Higher value of (Preference: 3600 seconds) or (Vbox_job.xml: 600 seconds))
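For reference, the "Setting CPU throttle for VM. (65%)" line corresponds to VirtualBox's per-VM execution cap, which on the command line is `VBoxManage controlvm <vm> cpuexecutioncap <1-100>`. A hedged sketch of that mapping (the VM name below is hypothetical, and the real wrapper drives VirtualBox through its own interface rather than this exact code):

```python
def throttle_command(vm_name, cpu_time_pct):
    """Build the VBoxManage call that caps a running VM's CPU time.

    cpu_time_pct is BOINC's "use at most X% of CPU time" preference;
    VirtualBox accepts an integer execution cap between 1 and 100."""
    cap = max(1, min(100, int(cpu_time_pct)))
    return ["VBoxManage", "controlvm", vm_name, "cpuexecutioncap", str(cap)]

# The 65% preference from the log line above would translate to:
print(throttle_command("boinc_slot0_vm", 65))
```

The cap throttles the hypervisor's scheduling of the guest; it does not shrink the guest's memory or I/O, which is part of why opinions differ in this thread about whether a 65% cap alone can make the VM miss its heartbeat deadline.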


So instead, again, of pointing at the habits or the systems of the people trying to help the community, please just use some common sense and see that there are too many identical occurrences for this to be the CPU cap.


You have set the CPU to throttle, and the VMs have a hard time handling it. You could set it back to 100%.
It is up to you. I can't help you if you're not open to going back to the default settings.


Yes, and you are not listening. Because, again, I don't need this project and I don't care whether it works or not. It's actually me helping the project, not the other way around. That's the common sense you should have.
As I said, I can't allocate 100% of the CPU to BOINC; that would be nonsense. And a small remark I can give you, by the way: putting 100% CPU on a project like that clogs the system anyway, because you are not giving other tasks, like Windows background tasks, enough room to breathe. When you put 25% + 65% it equals 90%, and suppose I put no restrictions, what happens then? What happens is that you get an unresponsive system. And it has been this way since I first tried to help this project, back when the i7-2600K launched. That's a fact, whether you like it or not.

Crystal pellet is the only one whose story is not stuck on a loop.

Well, the messages from the tasks are not accurate. Do you know why?

20/12/2020 08:54:46 | LHC@home | Requesting new tasks for CPU
20/12/2020 08:54:48 | LHC@home | Scheduler request completed: got 1 new tasks
20/12/2020 08:54:48 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:03:01 |  | Suspending computation - CPU is busy
20/12/2020 09:03:11 |  | Resuming computation
20/12/2020 09:07:24 | LHC@home | Computation for task CMS_3501742_1608447092.058199_0 finished
20/12/2020 09:07:24 | LHC@home | Starting task CMS_3496799_1608446491.320197_0
20/12/2020 09:08:35 | LHC@home | Sending scheduler request: To report completed tasks.
20/12/2020 09:08:35 | LHC@home | Reporting 1 completed tasks
20/12/2020 09:08:35 | LHC@home | Requesting new tasks for CPU
20/12/2020 09:08:37 | LHC@home | Scheduler request completed: got 1 new tasks
20/12/2020 09:08:37 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:14:40 | LHC@home | Computation for task CMS_3511089_1608448592.981055_0 finished
20/12/2020 09:14:40 | LHC@home | Starting task CMS_3484353_1608444690.176501_0
20/12/2020 09:16:03 | LHC@home | Sending scheduler request: To report completed tasks.
20/12/2020 09:16:03 | LHC@home | Reporting 1 completed tasks
20/12/2020 09:16:03 | LHC@home | Requesting new tasks for CPU
20/12/2020 09:16:05 | LHC@home | Scheduler request completed: got 1 new tasks
20/12/2020 09:16:05 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:18:50 | World Community Grid | Computation for task MIP1_00327156_1304_0 finished
20/12/2020 09:18:50 | LHC@home | Starting task CMS_3526052_1608450695.312753_0
20/12/2020 09:18:52 | World Community Grid | Started upload of MIP1_00327156_1304_0_r1188955442_0
20/12/2020 09:18:54 | World Community Grid | Finished upload of MIP1_00327156_1304_0_r1188955442_0
20/12/2020 09:19:21 | LHC@home | Computation for task CMS_3511101_1608448593.060255_0 finished
20/12/2020 09:19:21 | World Community Grid | Starting task MCM1_0169779_2766_1
20/12/2020 09:21:01 | LHC@home | Sending scheduler request: To report completed tasks.
20/12/2020 09:21:01 | LHC@home | Reporting 1 completed tasks
20/12/2020 09:21:01 | LHC@home | Requesting new tasks for CPU
20/12/2020 09:21:03 | LHC@home | Scheduler request completed: got 1 new tasks
20/12/2020 09:21:03 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:25:44 | GPUGRID | Sending scheduler request: Requested by project.
20/12/2020 09:25:44 | GPUGRID | Requesting new tasks for NVIDIA GPU
20/12/2020 09:25:46 | GPUGRID | Scheduler request completed: got 0 new tasks
20/12/2020 09:25:46 | GPUGRID | Project is temporarily shut down for maintenance
20/12/2020 09:25:46 | GPUGRID | Project requested delay of 3600 seconds
20/12/2020 09:33:09 | LHC@home | Computation for task CMS_3496799_1608446491.320197_0 finished
20/12/2020 09:33:09 | LHC@home | Starting task CMS_3513309_1608448893.228973_0
20/12/2020 09:35:03 | LHC@home | Sending scheduler request: To report completed tasks.
20/12/2020 09:35:03 | LHC@home | Reporting 1 completed tasks
20/12/2020 09:35:03 | LHC@home | Requesting new tasks for CPU
20/12/2020 09:35:05 | LHC@home | Scheduler request completed: got 1 new tasks
20/12/2020 09:35:05 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:40:26 | LHC@home | Computation for task CMS_3484353_1608444690.176501_0 finished
20/12/2020 09:40:26 | LHC@home | Starting task CMS_3523673_1608450395.107557_0
20/12/2020 09:40:28 |  | Suspending computation - CPU is busy
20/12/2020 09:40:38 |  | Resuming computation
20/12/2020 09:41:32 |  | Suspending GPU computation - computer is in use
20/12/2020 09:42:16 | LHC@home | Sending scheduler request: To report completed tasks.
20/12/2020 09:42:16 | LHC@home | Reporting 1 completed tasks
20/12/2020 09:42:16 | LHC@home | Requesting new tasks for CPU
20/12/2020 09:42:18 | LHC@home | Scheduler request completed: got 1 new tasks
20/12/2020 09:42:18 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:44:34 | LHC@home | Computation for task CMS_3526052_1608450695.312753_0 finished
20/12/2020 09:45:33 | LHC@home | update requested by user
20/12/2020 09:45:34 | LHC@home | Sending scheduler request: Requested by user.
20/12/2020 09:45:34 | LHC@home | Reporting 1 completed tasks
20/12/2020 09:45:34 | LHC@home | Requesting new tasks for CPU
20/12/2020 09:45:35 | LHC@home | Scheduler request completed: got 0 new tasks
20/12/2020 09:45:35 | LHC@home | No tasks sent
20/12/2020 09:45:35 | LHC@home | This computer has reached a limit on tasks in progress
20/12/2020 09:45:35 | LHC@home | Project requested delay of 6 seconds
20/12/2020 09:51:29 |  | Suspending computation - CPU is busy
20/12/2020 09:51:39 |  | Resuming computation
20/12/2020 09:59:17 | LHC@home | Computation for task CMS_3513309_1608448893.228973_0 finished
20/12/2020 09:59:17 | LHC@home | Starting task CMS_3515639_1608449193.816402_0
20/12/2020 10:00:42 | LHC@home | Sending scheduler request: To report completed tasks.
20/12/2020 10:00:42 | LHC@home | Reporting 1 completed tasks
20/12/2020 10:00:42 | LHC@home | Requesting new tasks for CPU
20/12/2020 10:00:45 | LHC@home | Scheduler request completed: got 0 new tasks

Because those are the accurate messages.
Which means the VM took 11 minutes to be restored... from an SSD... And you don't find that odd? Not the slightest bit? So a VM is telling you that between 40:39 and 51:31, i.e. 10 minutes 52 seconds, it was restoring itself, which is longer than any VM in the world takes to boot or to resume from a paused state, and you don't see the problem? Really?
What does it do during those 10 minutes? Because it sure is launched, according to BOINC itself and the logs inside it.
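The 10 minutes 52 seconds figure can be checked directly against the two state-change timestamps quoted from the wrapper log in the previous post (09:40:39 'Paused' -> 'Running', then 09:51:31 'Running' -> 'Paused'):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
resumed = datetime.strptime("2020-12-20 09:40:39", fmt)  # 'Paused' -> 'Running'
paused = datetime.strptime("2020-12-20 09:51:31", fmt)   # 'Running' -> 'Paused'
gap = paused - resumed
print(gap)  # 0:10:52
```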

And you don't find it odd either that each time the VM needs to be started, the computation has to be suspended, even when nothing else is running? So you are actually telling me that an AMD 3700X with 16 threads has to use more than 65% of its overall resources to launch this small VM, which is, again, not the heaviest VM in the world: it has no GUI, it's a headless server with practically no drivers to load, nothing... And for you that's normal? Business as usual? Seriously?

Plus, I've asked 2 precise questions here:


What is the CPU time in the task stats on the website, and the other stat? Because they are very close. So unless that too is an estimate: this all takes place at night, when there is only Windows services activity, and the other VMs are not doing much either. So on a common-sense note, I think we can both agree this is not normal, correct?


So instead of being on a loop, just answer those questions precisely.
20) Questions and Answers : Windows : All vBox WU in error (Message 43954)
Posted 20 Dec 2020 by vigilian
Post:
Well, maybe you have a Xeon CPU with 64 cores, or maybe you only do word processing on your hosts, but I don't.
Plus, strangely, I don't have these problems with VMs that are far more demanding in terms of horsepower, which are also paused, resumed, and run with a CPU execution cap, and there is no problem with them. So maybe, as you said, those VMs are fragile, but if they are, then there is a terrible problem with the guest OS, which is not correctly optimized.

But anyway, your arguments have no grip on reality. Why? Because for the last 6 hours there haven't been enough pause/resume events, the CERN VM could have used 100% of the CPU time, and still all the tasks resulted in errors, and more than a few of them had no pauses at all.

So instead, again, of pointing at the habits or the systems of the people trying to help the community, please just use some common sense and see that there are too many identical occurrences for this to be the CPU cap.




©2022 CERN