Message boards : ATLAS application : Atlas apparently affecting File Manager operation
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44624 - Posted: 30 Mar 2021, 21:03:36 UTC

OK, this problem is strange, so I will try my best to describe what's happening.
I'll start by saying that there have been no errors in any logs I can find, certainly none in std_err.
The machine: Linux Mint 20, x86_64, Ryzen 2700X with 32 GB RAM, always updated. Running using 95% of available cores (though changing this has made no difference). The OS is on an M.2 SSD; the CVMFS mounts are on a separate drive to reduce writes (but were originally on the SSD, and changing this has not changed the issue).
When I first start running Atlas Native (from a fresh reboot) there are no issues. Tasks complete without error and I can do my normal thing while they're running without issue.
After a variable amount of time, usually a couple of days of continuous running, my file manager becomes slower and slower to open. Eventually it can take a couple of minutes to open and become responsive. The Rubbish folder/trash/recycle bin will become unresponsive and refuse to open, eventually changing its icon to an error symbol and returning a time-out error.
At no point are direct file operations by other applications affected, unless they use the file manager for load/save operations, in which case they are affected the same way.
Opening applications is unaffected.
File access from the terminal is unaffected.
If I open the file manager from the terminal it is affected the same way, but if I open it using "sudo" it is unaffected, so the issue is particular to my profile.
Note: only the file manager (currently Thunar but I have tried others with the same result) and Rubbish Bin are affected, no other file access is affected in any way I can see.
I have TRIM enabled as a cron job; this has not changed the issue either way.
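For reference, a weekly TRIM scheduled from cron typically looks like the entry below (a sketch only; the fstrim path varies by distro, and many current distros ship a systemd fstrim.timer instead):

```text
# root's crontab (sudo crontab -e): trim all supported filesystems
# every Sunday at 03:00
0 3 * * 0  /sbin/fstrim -av
```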
If I reboot the machine all operations return to normal. If I continue to run Atlas the problem returns. If I switch to a different BOINC project (I tried ODLK, TN-Grid, WCG, and Asteroids) the problem does not occur, so I am convinced it is something particular to Atlas, but have no idea what.

Any help would be greatly appreciated.
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,904,691
RAC: 137,959
Message 44641 - Posted: 2 Apr 2021, 6:31:14 UTC - in response to Message 44624.  

Please post the output of the following commands:
cvmfs_config showconfig -s atlas.cern.ch
mount
free (run this when the slowdown happens)



Besides that, some questions:

Do you (periodically) run a script, e.g. a backup, that scans through the CVMFS mount point?
Do (or did) you browse the CVMFS directories with your file manager, and does the file manager remember where you last left the directory tree?
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44643 - Posted: 2 Apr 2021, 12:58:53 UTC - in response to Message 44641.  
Last modified: 2 Apr 2021, 13:10:24 UTC

I removed and reinstalled cvmfs since posting, so I will have to see if it continues to happen. If it does I'll post the output of those commands.

I did have a daily AV scan running; I have just removed it. There were no other cron jobs running. I have now whitelisted the cvmfs scratch directory tree to be sure.
I typically do not look into the cvmfs directories unless I mis-click, as they do show up as unmounted drives in my file manager. I have changed the thumbnailing setting in the file manager to "local files only".

I am now looking at an issue where the work units are not suspending when instructed. The BOINC manager shows the work as suspended but the results from top and the CPU core temperatures say otherwise.
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,904,691
RAC: 137,959
Message 44644 - Posted: 2 Apr 2021, 13:34:14 UTC - in response to Message 44643.  

... daily AV scan running

Including the CVMFS mount point?
That must not be done.





... work units are not suspending when instructed.

ATLAS does not support suspend/resume.
Whenever you suspend a task, restart BOINC, or reboot, the task will drop all work and start from scratch.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44648 - Posted: 3 Apr 2021, 3:46:59 UTC - in response to Message 44644.  

... daily AV scan running

Including the CVMFS mount point?
That must not be done.

I don't recall seeing that anywhere in the setup instructions. Regardless, I've whitelisted the mount point now.




... work units are not suspending when instructed.

ATLAS does not support suspend/resume.
Whenever you suspend a task, restart BOINC, or reboot, the task will drop all work and start from scratch.


I mean that when the BOINC manager is told to suspend computation, either manually or when starting an Exclusive Application, the tasks do not stop. They continue running in the background. To stop them and free up the CPU for other work I have to completely shut down the boinc-client service.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44649 - Posted: 3 Apr 2021, 5:41:49 UTC - in response to Message 44641.  
Last modified: 3 Apr 2021, 5:42:15 UTC

$ cvmfs_config showconfig -s atlas.cern.ch
CVMFS_REPOSITORY_NAME=atlas.cern.ch
CVMFS_BACKOFF_INIT=2 # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10 # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1 # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/home/michael/Downloads/scratch/cvmfs # from /etc/cvmfs/default.conf
CVMFS_CACHE_DIR=/home/michael/Downloads/scratch/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE= # from /etc/cvmfs/default.conf
CVMFS_CONFIG_REPO_DEFAULT_ENV=1 # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY='http://cvmfsbproxy.cern.ch:3126;http://cvmfsbproxy.fnal.gov:3126' # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800 # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY='http://192.168.1.3:3128;DIRECT' # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2 # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024 # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3 # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072 # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat' # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300 # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4000 # from /etc/cvmfs/default.conf
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas,atlas-condb,grid,cernvm-prod,sft,alice # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1ral-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1fnal-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1unl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch;http://s1ihep-cvmfs.openhtc.io/cvmfs/atlas.cern.ch' # from /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SHARED_CACHE=yes # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5 # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10 # from /etc/cvmfs/default.conf
CVMFS_USE_CDN=yes # from /etc/cvmfs/default.local
CVMFS_USE_GEOAPI=yes # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs # from /etc/cvmfs/default.conf

$mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=16367192k,nr_inodes=4091798,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3289672k,mode=755)
/dev/nvme0n1p6 on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=4899)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sda1 on /home/michael/Downloads type ext4 (rw,relatime)
/dev/sdc1 on /home/michael/Steam_Library type ext4 (rw,relatime)
/dev/sdb1 on /home/michael/media type ext4 (rw,relatime,stripe=32750)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3289668k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
/etc/auto.misc on /misc type autofs (rw,relatime,fd=6,pgrp=1992532,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=16391084)
-hosts on /net type autofs (rw,relatime,fd=12,pgrp=1992532,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=16389736)
/etc/auto.cvmfs on /cvmfs type autofs (rw,relatime,fd=18,pgrp=1992532,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=16391089)
cvmfs2 on /cvmfs/cvmfs-config.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/atlas.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/atlas-condb.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/sft.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

~$ free
              total        used        free      shared  buff/cache   available
Mem:       32896692     7766904    16915556      240416     8214232    24446640
Swap:      35163132      279904    34883228

I have not looked into the cvmfs mounts.
Virus scanning has been removed from the schedule and the mounts whitelisted.
I have no cron jobs listed at all.
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,904,691
RAC: 137,959
Message 44650 - Posted: 3 Apr 2021, 12:02:24 UTC - in response to Message 44649.  

Your setting:
CVMFS_CACHE_BASE=/home/michael/Downloads/scratch/cvmfs # from /etc/cvmfs/default.conf

Default would be:
CVMFS_CACHE_BASE=/var/lib/cvmfs    # from /etc/cvmfs/default.conf

Why did you change the default setting?
CVMFS runs under a special system account "cvmfs".
Now this account requires write access to a directory below /home/michael.
Not a good setup.


Why did you modify /etc/cvmfs/default.conf?
The comments from /etc/cvmfs/default.conf start with a clear requirement:
# Don't edit here.  Create /etc/cvmfs/default.local.

This avoids updates overwriting your local settings.
Hence, place your settings in default.local or in another .local file of the hierarchy.
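For example, the overrides seen earlier in this thread belong in a file like this (the cache path is illustrative, not a recommendation):

```shell
# /etc/cvmfs/default.local -- local overrides that survive package updates
CVMFS_CACHE_BASE=/scratch/cvmfs
CVMFS_REPOSITORIES=atlas,atlas-condb,grid,cernvm-prod,sft,alice
CVMFS_QUOTA_LIMIT=4096
CVMFS_USE_CDN=yes
```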


Besides /cvmfs, the directory set by CVMFS_CACHE_BASE should also be excluded from your AV scans.
This is for performance reasons.
The integrity of the files is already ensured by cryptographic keys.


In general it is not recommended to include a remote filesystem (NFS, CVMFS, ...) in an AV scan.
AV scanning should be the task of the system the data is stored on.
If you scan those files locally you may trigger lots of network transfers.
In the case of CVMFS this can be a few kB (cvmfs-config.cern.ch) but easily hundreds of GB.
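As a concrete sketch, with ClamAV the exclusions could look like this (ExcludePath is clamd's regex-based directive; the cache path here is this host's setting and would differ elsewhere):

```text
# /etc/clamav/clamd.conf -- keep the scanner away from CVMFS
ExcludePath ^/cvmfs/
ExcludePath ^/scratch/cvmfs/
```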


Output of "mount":
looks OK.

Output of "free":
looks OK, except that used swap seems a bit high.
This is fine as long as the system has enough RAM to "play" with (in this case 8 GB of cache).
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44651 - Posted: 3 Apr 2021, 21:51:42 UTC - in response to Message 44650.  
Last modified: 3 Apr 2021, 21:53:50 UTC

The cvmfs directory was moved to a mechanical drive in response to the file manager problem. Previously it was running in the default location, which is on an M.2 SSD.

As mentioned, AV scanning had already been stopped completely before the most recent lock-up.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44666 - Posted: 7 Apr 2021, 0:04:43 UTC

Since it happened again (previous post) I have stopped work on the project until I can get this worked out. As I'm studying, I need access to my file system in a timely manner, and this problem precludes that. I haven't had an issue with the work units themselves.

I notice an update to cvmfs came through the repositories today. It would be good if it corrects the issue.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 44731 - Posted: 13 Apr 2021, 21:32:27 UTC

Problem is still occurring.
All cvmfs settings were returned to default after the last post.

$ cvmfs_config showconfig -s atlas.cern.ch
CVMFS_REPOSITORY_NAME=atlas.cern.ch
CVMFS_BACKOFF_INIT=2 # from /etc/cvmfs/default.conf
CVMFS_BACKOFF_MAX=10 # from /etc/cvmfs/default.conf
CVMFS_BASE_ENV=1 # from /etc/cvmfs/default.conf
CVMFS_CACHE_BASE=/scratch/cvmfs # from /etc/cvmfs/default.local
CVMFS_CACHE_DIR=/scratch/cvmfs/shared
CVMFS_CHECK_PERMISSIONS=yes # from /etc/cvmfs/default.conf
CVMFS_CLAIM_OWNERSHIP=yes # from /etc/cvmfs/default.conf
CVMFS_CLIENT_PROFILE= # from /etc/cvmfs/default.conf
CVMFS_CONFIG_REPO_DEFAULT_ENV=1 # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_CONFIG_REPOSITORY=cvmfs-config.cern.ch # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_DEFAULT_DOMAIN=cern.ch # from /etc/cvmfs/default.d/50-cern-debian.conf
CVMFS_FALLBACK_PROXY='http://cvmfsbproxy.cern.ch:3126;http://cvmfsbproxy.fnal.gov:3126' # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_HOST_RESET_AFTER=1800 # from /etc/cvmfs/default.conf
CVMFS_HTTP_PROXY='http://192.168.1.3:3128;DIRECT' # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_KCACHE_TIMEOUT=2 # from /etc/cvmfs/default.local
CVMFS_KEYS_DIR=/cvmfs/cvmfs-config.cern.ch/etc/cvmfs/keys/cern.ch # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_LOW_SPEED_LIMIT=1024 # from /etc/cvmfs/default.conf
CVMFS_MAX_RETRIES=3 # from /etc/cvmfs/default.local
CVMFS_MOUNT_DIR=/cvmfs # from /etc/cvmfs/default.conf
CVMFS_NFILES=131072 # from /etc/cvmfs/default.conf
CVMFS_PAC_URLS='http://grid-wpad/wpad.dat;http://wpad/wpad.dat;http://cernvm-wpad.fnal.gov/wpad.dat;http://cernvm-wpad.cern.ch/wpad.dat' # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/default.conf
CVMFS_PROXY_RESET_AFTER=300 # from /etc/cvmfs/default.conf
CVMFS_QUOTA_LIMIT=4096 # from /etc/cvmfs/default.local
CVMFS_RELOAD_SOCKETS=/var/run/cvmfs # from /etc/cvmfs/default.conf
CVMFS_REPOSITORIES=atlas,atlas-condb,grid,cernvm-prod,sft,alice # from /etc/cvmfs/default.local
CVMFS_SEND_INFO_HEADER=yes # from /cvmfs/cvmfs-config.cern.ch/etc/cvmfs/domain.d/cern.ch.conf
CVMFS_SERVER_URL='http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1ral-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1bnl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1fnal-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1unl-cvmfs.openhtc.io/cvmfs/atlas.cern.ch;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/atlas.cern.ch;http://s1ihep-cvmfs.openhtc.io/cvmfs/atlas.cern.ch' # from /etc/cvmfs/domain.d/cern.ch.local
CVMFS_SHARED_CACHE=yes # from /etc/cvmfs/default.conf
CVMFS_STRICT_MOUNT=no # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT=5 # from /etc/cvmfs/default.conf
CVMFS_TIMEOUT_DIRECT=10 # from /etc/cvmfs/default.conf
CVMFS_USE_CDN=yes # from /etc/cvmfs/default.local
CVMFS_USE_GEOAPI=yes # from /etc/cvmfs/domain.d/cern.ch.conf
CVMFS_USER=cvmfs # from /etc/cvmfs/default.conf

$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=16367192k,nr_inodes=4091798,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3289668k,mode=755)
/dev/nvme0n1p6 on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=20181)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sda1 on /home/michael/Downloads type ext4 (rw,relatime)
/dev/sdb1 on /home/michael/media type ext4 (rw,relatime,stripe=32750)
/dev/sdc1 on /home/michael/Steam_Library type ext4 (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3289668k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
/etc/auto.misc on /misc type autofs (rw,relatime,fd=6,pgrp=2110,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=55395)
-hosts on /net type autofs (rw,relatime,fd=12,pgrp=2110,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=39835)
/etc/auto.cvmfs on /cvmfs type autofs (rw,relatime,fd=18,pgrp=2110,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=44963)
cvmfs2 on /cvmfs/cvmfs-config.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/atlas.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/sft.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/atlas-condb.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/grid.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/cernvm-prod.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
cvmfs2 on /cvmfs/alice.cern.ch type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

$ free
              total        used        free      shared  buff/cache   available
Mem:       32896680     4325232     4077124      274352    24494324    27926860
Swap:      35163132        7472    35155660
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 46701 - Posted: 30 Apr 2022, 3:30:25 UTC

I have come back to this after a layoff.
The problem has continued to occur.

I have found it is specific to the file manager running under my user account: when the problem occurs, opening Thunar under the root account does not suffer from it.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 46743 - Posted: 5 May 2022, 1:48:14 UTC

This machine is now completely stock, yet the file manager issue is still occurring.
cuphi

Joined: 17 Jun 21
Posts: 12
Credit: 2,655,004
RAC: 0
Message 46744 - Posted: 5 May 2022, 2:09:09 UTC - in response to Message 46743.  

My openSUSE boxes that run XFCE4 are not having the issue. Perhaps the Linux Mint snapshot utility is trying to capture all of CVMFS? That is really just a guess. I had issues with Mint failing tasks on my laptop, so I switched it to openSUSE like the rest of my network.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 46745 - Posted: 5 May 2022, 3:34:59 UTC
Last modified: 5 May 2022, 3:44:52 UTC

It's worth checking out at least.

edit: I don't have Timeshift installed
cuphi

Joined: 17 Jun 21
Posts: 12
Credit: 2,655,004
RAC: 0
Message 46747 - Posted: 5 May 2022, 4:13:05 UTC - in response to Message 46745.  

What about snapper?
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,904,691
RAC: 137,959
Message 46748 - Posted: 5 May 2022, 4:38:55 UTC

cuphi wrote:
Perhaps the LinuxMint snapshot utility is trying to capture all of CVMFS?

I had a similar idea:
When Thunar starts it may try to read directories deeper in the tree.
In the case of /cvmfs this can cause lots of requests to CVMFS catalogue files.
Those are usually downloaded from the internet to keep them as fresh as possible, and they can be anywhere between a few MB and more than 100 MB each.

I'm not familiar with Thunar.
Perhaps it provides a setting to avoid this "deep tree inspection".

@Dark Angel
In case you have the cvmfs client configured to use your local squid, you may "tail -f ..." squid's access.log.
Then start Thunar and watch the tail console for those catalogue requests.
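A minimal sketch of that check (the log path is the usual Debian/Mint default and is an assumption; the hostnames matched are the stratum-1 servers from the CVMFS_SERVER_URL output in this thread):

```shell
# Filter squid access-log lines for CVMFS requests. Live use would be:
#   sudo tail -f /var/log/squid/access.log | cvmfs_lines
cvmfs_lines() {
    # the stratum-1 servers in this thread are *.openhtc.io / *.cern.ch
    grep --line-buffered -E 'openhtc\.io|cern\.ch'
}
```

While that pipe is running, open Thunar; any burst of catalogue downloads should show up immediately.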
cuphi

Joined: 17 Jun 21
Posts: 12
Credit: 2,655,004
RAC: 0
Message 46749 - Posted: 5 May 2022, 5:16:06 UTC - in response to Message 46748.  


When Thunar starts it may try to read directories deeper in the tree


As far as I am aware this is not the case. As I said, I don't have this issue with Thunar on openSUSE. It has to be something unique to Linux Mint or to Dark Angel's system. I don't know anything about the inner workings of CVMFS or Singularity, but I have been around Unix-like systems enough to know that deviating from the defaults is bad if you can't read the code. I am thinking that something was changed, like file permissions, group membership, fstab entries, over-tuning of memory/filesystem options... it could be a wide assortment of things. Since we only have one person posting about it, I really doubt it's baked into the code somewhere. I think it's much more likely to be a configuration problem. I could be wrong, though. Linux Mint is a strange beast because of its target audience.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 46750 - Posted: 5 May 2022, 5:33:16 UTC

It doesn't help that it's intermittent as well. The file manager (the application is Thunar but I access it using the XFCE Places toolbar applet) works fine most of the time, but after a period of time, which could be several days, it starts to slow down until it stops working. Sometimes I only notice it when the trash applet stops working.
cuphi

Joined: 17 Jun 21
Posts: 12
Credit: 2,655,004
RAC: 0
Message 46751 - Posted: 5 May 2022, 6:05:43 UTC - in response to Message 46750.  

Well, let's try the obvious:

$ sudo egrep -iw "warning|error|failure" /var/log/syslog

There may be a lot of false positives in these results, but you have to start somewhere.
Dark Angel
Joined: 7 Aug 11
Posts: 62
Credit: 21,011,361
RAC: 8,913
Message 46753 - Posted: 5 May 2022, 7:39:14 UTC - in response to Message 46751.  

I find a lot of stuff like this:

May 5 17:00:16 Zen cvmfs2: (sft-nightlies.cern.ch) switching host from http://s1ihep-cvmfs.openhtc.io/cvmfs/sft-nightlies.cern.ch to http://s1unl-cvmfs.openhtc.io/cvmfs/sft-nightlies.cern.ch (host returned HTTP error)
May 5 17:00:17 Zen cvmfs2: (sft-nightlies.cern.ch) switching host from http://s1unl-cvmfs.openhtc.io/cvmfs/sft-nightlies.cern.ch to http://s1fnal-cvmfs.openhtc.io/cvmfs/sft-nightlies.cern.ch (host returned HTTP error)
May 5 17:00:17 Zen cvmfs2: (sft-nightlies.cern.ch) switching host from http://s1fnal-cvmfs.openhtc.io/cvmfs/sft-nightlies.cern.ch to http://s1bnl-cvmfs.openhtc.io/cvmfs/sft-nightlies.cern.ch (host returned HTTP error)

and a few from the Tracker-Miner indexing service (which I've now uninstalled).

I'll wait and see if it happens again.


©2024 CERN