21) Message boards : Theory Application : Theory native fails with "mountpoint for cgroup not found" (Message 47620)
Posted 26 Dec 2022 by wujj123456
Thanks for looking into this. There's a `/cvmfs/grid.cern.ch/vc/containers/runc.new` that also seems to work fine.

Perfect. That's an easier patch to maintain. Perhaps someone is aware of the problem and is testing a fix already. I can only hope a fix for everyone comes soon.
22) Message boards : Theory Application : Theory native fails with "mountpoint for cgroup not found" (Message 47618)
Posted 26 Dec 2022 by wujj123456
Just to be sure, from what I can find, none of the LHC@Home application source code is open, right?

Turns out this question is irrelevant. This specific issue is only with the runc on CVMFS. The runc that came with my distro had no problem starting containers on cgroup v2. So I hacked around in the cranky script that is used to start native Theory tasks, and it now works, minus suspend/resume. Note that this is not tested in any environment other than my own, though it should work as long as the distro's runc can cope with cgroup v2.

I ran two tasks and they both finished fine:
Note the WARNING about runc in the output, which is what I added in the patch: https://pastebin.com/vpLvagEr. I set the link to expire in a week in case the patch has undesirable side effects; I can upload a permanent one if the admins approve. Hopefully just swapping the runc version doesn't have side effects (like bogus results), but I'd like to get confirmation first.
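For the curious, here is a minimal sketch of the kind of swap such a patch does. This is not the actual pastebin patch; the variable name RUNC and the fallback logic are hypothetical, since cranky's real layout may differ:

```shell
#!/bin/sh
# Hypothetical sketch: prefer a cgroup-v2-capable distro runc over the
# one shipped on CVMFS. RUNC is an assumed variable name, not
# necessarily what cranky actually uses.
RUNC="/cvmfs/grid.cern.ch/vc/containers/runc"
if command -v runc >/dev/null 2>&1; then
    RUNC="$(command -v runc)"
    echo "WARNING: using distro runc at $RUNC instead of the CVMFS one"
fi
echo "runc path in use: $RUNC"
```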

For the real fix, we may not even need a patch if we can upgrade the runc in CVMFS. I don't know whether the one in CVMFS is forked, but it is certainly old:
$ runc -v
runc version 1.1.0-0ubuntu1.1
spec: 1.0.2-dev
go: go1.18.1
libseccomp: 2.5.3

$ /cvmfs/grid.cern.ch/vc/containers/runc -v
runc version spec: 1.0.0

If we go that route, the newer runc obviously needs to be tested against other setups to ensure it doesn't break cgroup v1 or any other workload. Suspend/resume on cgroup v2 would need additional work, but cranky already tests for the cgroup structure. Since cgroup v2 will never have a matching structure, suspend would simply be skipped, same as on cgroup v1 systems without the right structure.
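For anyone wanting to check which cgroup version their host runs before deciding whether this affects them, the filesystem type mounted at /sys/fs/cgroup tells the story (cgroup2fs on a unified v2 host, tmpfs on v1/hybrid). A quick check:

```shell
# Report whether this host mounts cgroup v1 or v2 at /sys/fs/cgroup.
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null)
case "$fstype" in
    cgroup2fs) msg="cgroup v2 (unified hierarchy)" ;;
    tmpfs)     msg="cgroup v1 (or hybrid layout)" ;;
    *)         msg="no cgroup mount detected" ;;
esac
echo "$msg"
```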
23) Message boards : Theory Application : Theory native fails with "mountpoint for cgroup not found" (Message 47617)
Posted 26 Dec 2022 by wujj123456
Most distros use cgroup v2 and this should not have been taken out of beta with only v1 support. I have nearly 100 failed tasks now just from this application.

That isn't fair, honestly. The wide adoption of cgroup v2 happened long after this application was released, and it still works in vbox. Ideally, cgroup v2 should have been supported before mainstream distros started to switch over. At this point, I just hope we can get some quick hacks in, if the cgroup part is not crucial for the application itself. Even when my system was on cgroup v1, I never bothered to set up suspend and resume. If the current cgroup v2 failure only affects that, I really hope I can just bypass it.

Just to be sure, from what I can find, none of the LHC@Home application source code is open, right?
24) Message boards : Theory Application : Theory native fails with "mountpoint for cgroup not found" (Message 47598)
Posted 21 Dec 2022 by wujj123456
... for newer kernels, cgroup has changed from V1 to V2.
This ends with "mountpoint for cgroup not found" for Theory native ...

This affects all Linux systems using cgroups v2.
Theory's suspend/resume only supports cgroups v1 (freezer).
ATM there's no solution available as it would mean "somebody" would have to write the code to support cgroups v2.

... while Atlas native runs o.k.

Unlike Theory, ATLAS (native)
- uses Singularity instead of Runc
- does not support suspend/resume.

Sorry for digging out this old thread. I wonder: if I am willing to forgo suspend/resume, could I make native Theory work under cgroups v2?

I suppose that means I could lose work or even end up with occasional errors, but I never suspend/pause work on my server, and I've configured the task switch time to effectively never switch. So not being able to suspend and resume hardly seems worth the immediate failures I am getting. Example WU: https://lhcathome.cern.ch/lhcathome/result.php?resultid=374269026
25) Message boards : ATLAS application : Native Atlas Guide (Message 47594)
Posted 21 Dec 2022 by wujj123456
Found the guide here : https://apptainer.org/docs/admin/main/installation.html

The WU is running a lot longer than it did before, so I think I am good now. Oddly, the CPU and memory usage is minimal; is that normal?

Looks like there are two versions, apptainer and apptainer-suid. Curious which one you installed?
26) Message boards : ATLAS application : Question for the comment in Ubuntu's boinc-client.service unit file about Atlas (Message 47587)
Posted 12 Dec 2022 by wujj123456
Well, I have some answer now. Vbox doesn't work with these options on, even for Theory. The WUs error out right away, unable to manage the VM, just like when ProtectSystem is set to strict (the default on Ubuntu 22.04).

So regardless of whether these options are specific to native ATLAS or not, I am not going to enable them. 🤣
27) Message boards : ATLAS application : Question for the comment in Ubuntu's boinc-client.service unit file about Atlas (Message 47586)
Posted 10 Dec 2022 by wujj123456
I just realized the boinc-client.service unit file shipped with Ubuntu 22.04 contains the following comments specific to Atlas. I am not aware of other Atlas applications among BOINC projects, so I assume this refers to LHC's ATLAS.

ReadWritePaths=-/var/lib/boinc -/etc/boinc-client
ExecStop=/usr/bin/boinccmd --quit
ExecReload=/usr/bin/boinccmd --read_cc_config
ExecStopPost=/bin/rm -f lockfile
# The following options prevent setuid root as they imply NoNewPrivileges=true
# Since Atlas requires setuid root, they break Atlas
# In order to improve security, if you're not using Atlas,
# Add these options to the [Service] section of an override file using
# sudo systemctl edit boinc-client.service
#RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
#PrivateTmp=true  #Block X11 idle detection

Based on my rudimentary understanding of these options, I have a feeling they only apply to native ATLAS. If I only run the vbox version, can I enable these options safely?
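For reference, the override the packagers describe would end up as a standard systemd drop-in; a sketch of what `sudo systemctl edit boinc-client.service` would produce (the drop-in path is systemd's default; the option values are the ones from the comment above):

```ini
# /etc/systemd/system/boinc-client.service.d/override.conf
# (created by: sudo systemctl edit boinc-client.service)
[Service]
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
PrivateTmp=true
```

After editing, `sudo systemctl daemon-reload && sudo systemctl restart boinc-client` applies it.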
28) Message boards : Number crunching : Did cvmfs download ~150GB of data two days ago? (Message 46567)
Posted 31 Mar 2022 by wujj123456
Did you check whether your squid correctly rejects requests initiated from outside your LAN?
I suspect you intercept all HTTP traffic from inside your LAN directly at the router and force it through squid, right?
This means at least traffic to destination port 80.
Are you aware that some CVMFS/Frontier servers use ports 8000 or 8080?
It would be worthwhile to also route them through squid.

Yes, the squid proxy only listens on internal interfaces and port 80. I can put a monitoring rule in place to check how much traffic there is on 8000 and 8080, but from what I see, I don't think there is much traffic not captured by the current setup for vbox WUs.
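If one did want to also intercept 8000/8080 at the router, a typical transparent-proxy redirect looks like this. This is a firewall-config sketch only, assuming an iptables-based router with squid listening in intercept mode on port 3128; the interface name lan0 and the ports are illustrative:

```shell
# Redirect Frontier/CVMFS alternate HTTP ports from the LAN to the
# local squid (sketch; requires squid configured for interception).
iptables -t nat -A PREROUTING -i lan0 -p tcp --dport 8000 -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i lan0 -p tcp --dport 8080 -j REDIRECT --to-port 3128
```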

You may try out the tuning options from my HowTo and reduce this to 256 MB.
This would leave more RAM for other use on the squid box, e.g. for disk cache.
4-10 GB is suggested to be the CVMFS disk cache size.

Good to know. I intend to capture system updates and Steam updates too, which is why it's large. Those are rare enough, though, that most of the time LHC simply enjoys a great hit rate. :-)
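For completeness, the CVMFS client cache knobs being discussed live in /etc/cvmfs/default.local. A sketch with the suggested sizing (CVMFS_QUOTA_LIMIT is in MB; the proxy address is a hypothetical placeholder):

```shell
# /etc/cvmfs/default.local (sketch)
CVMFS_QUOTA_LIMIT=8000                      # disk cache; 4-10 GB suggested
CVMFS_HTTP_PROXY="http://my-squid:3128"     # hypothetical local squid address
```

Running `cvmfs_config reload` afterwards picks up the new settings.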

If you run Theory vbox each VM will set up its own CVMFS cache (meanwhile old and degraded).
Hence, each task will send out lots of update requests.
They all get lost when the VM shuts down.
Your squid should cover most of them, but it's more efficient to run Theory native and keep the data in the local CVMFS cache on the crunching box.

That's what I thought, and native also consumes much less memory. However, if such big downloads happen often enough, it would change the balance. Hence my question: I'm trying to understand what happened and how often it could happen again.
29) Message boards : Number crunching : Did cvmfs download ~150GB of data two days ago? (Message 46564)
Posted 31 Mar 2022 by wujj123456
Large downloads happen from time to time, although 150 GB within 1 h is very unusual.
CVMFS acts like a cache and tries to serve as much data as possible from its local store.

This reminded me of a few interesting details. The server only had 70-80 GB of space left, and AFAIK it did not fill up, so there is no way it stored 150 GB of data. Meanwhile, the Squid cache I configured is 32 GB on disk, but 99%+ of hit bytes are served from the 4 GB memory cache. It doesn't seem that I even need more than 4 GB of data from CVMFS, assuming the vbox Theory workload is similar to native apart from setup.

I'm pretty curious what this download is actually doing. It kinda feels like a bug TBH...
30) Message boards : Number crunching : Did cvmfs download ~150GB of data two days ago? (Message 46563)
Posted 31 Mar 2022 by wujj123456
Thanks for the reply.

For the data cap, I mostly need to understand how much data to allocate for BOINC. Regarding ATLAS, it was running on the other Windows machine and I have set a concurrency limit. Its usage is indeed high but very predictable, and I've set aside enough for that. The server I mentioned here is S8026 in my list of computers, which only runs native Theory. It usually has pretty low usage, but this download caught me off guard a bit.

I actually have Squid set up at my router directly, and the hit rate is superb, 99%+ in terms of bytes, for my Windows machine running both ATLAS and Theory in vbox. I didn't observe a similar excessive download during the same period from the vbox WUs. Does that mean I might be better off forgoing the native installation and relying fully on Squid caching if I want predictable bandwidth usage?
31) Message boards : Number crunching : Did cvmfs download ~150GB of data two days ago? (Message 46561)
Posted 31 Mar 2022 by wujj123456
I set up the native app for the Theory application, and thus installed CVMFS. I noticed just now that from 2022-03-29 01:16 GMT-7 to 2022-03-29 02:46 GMT-7 (accurate to a minute or two), my server that runs LHC downloaded at full speed for more than an hour, totaling around 150 GB of data.

From logging on my router, I can see the traffic all came from 2606:4700:3033::6815:48a2, which is a Cloudflare address. I then retrieved the syslog for my system and the CVMFS-related logs stood out: https://pastebin.com/rTVx9r3C. s1asgc-cvmfs.openhtc.io resolves to that exact address: https://pastebin.com/YfiR9SKB

Unfortunately I have a data cap from my ISP, so I need to be a bit more careful about such incidents.

I've been running native Theory on the same server for a year or two now, and this is the first time I've noticed such a thing. I haven't touched its setup in quite a while, so I am fairly confident nothing changed on my end.

Was this some one-time big update? A bug? Or is it expected from time to time?
32) Message boards : ATLAS application : VM did not power off when requested (Message 43712)
Posted 25 Nov 2020 by wujj123456
Do you have the same disconnection problem with VirtualBox 6.1.12 from the BOINC homepage instead of 6.1.16?

I haven't tried. Once the current pending work finishes I can switch to 6.1.12 and check if it helps.
33) Message boards : ATLAS application : VM did not power off when requested (Message 43699)
Posted 25 Nov 2020 by wujj123456
Nvm, that parsing is just for that line, so it's not wrong. It's not clear, then, how it fails to find all the state transition logs. It should have parsed line by line and returned the last state... :-(
34) Message boards : ATLAS application : VM did not power off when requested (Message 43698)
Posted 25 Nov 2020 by wujj123456
Thanks. I grabbed output of https://lhcathome.cern.ch/lhcathome/result.php?resultid=289380496. VBox.log output: https://pastebin.com/YZyasHCf

The power off actually executed successfully right away, but the task output still arrived after 5 minutes saying the VM failed to power off. The task log also had the first state change:
2020-11-24 14:49:26 (13272): VM state change detected. (old = 'PoweredOff', new = 'Running')

Clearly the poll at https://github.com/BOINC/boinc/blob/master/samples/vboxwrapper/vbox_vboxmanage.cpp#L1253 failed to see the state change to "poweredoff", and it didn't log the state change. From the log line "VM did not power off when requested", it's clear "online" was still set to true after 5 minutes.

I think the bug is in the read_vm_log function itself: https://github.com/BOINC/boinc/blob/master/samples/vboxwrapper/vbox_common.cpp#L450

There is no further "Guest Log" entry in VBox.log after "02:32:35.088629 VMMDev: Guest Log: *** Success! Shutting down the machine. ***". If we then look at the body of the while loop: the first line.find(console) would have found "Console: Machine state changed to 'Stopping'", which still sets online to true. The second line_pos = line.find("Guest Log:") would then seek to the end of the file, saving that cursor and missing all the state changes in between.

I could be wrong, but if not, how is this working on Linux? Is this wrapper code specific to Windows, or is the VBox.log output slightly different there?
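To make the suspected failure mode concrete, here is a toy log (lines styled after the VBox.log excerpts above, not a real log) and a line-by-line scan that does pick up the final state. The point is that a parser which first seeks past the last "Guest Log:" marker would never see the 'PoweredOff' line that comes after it:

```shell
# Toy tail of a VBox.log: the PoweredOff transition comes *after*
# the last "Guest Log:" line, which is where the suspected bug bites.
cat > /tmp/toy_vbox.log <<'EOF'
00:01:00 VMMDev: Guest Log: *** Success! Shutting down the machine. ***
00:01:02 Console: Machine state changed to 'Stopping'
00:01:05 Console: Machine state changed to 'PoweredOff'
EOF

# A straightforward line-by-line scan keeps the last console state seen.
last_state=$(sed -n "s/.*Machine state changed to '\([A-Za-z]*\)'.*/\1/p" /tmp/toy_vbox.log | tail -n 1)
echo "$last_state"
```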
35) Message boards : ATLAS application : VM did not power off when requested (Message 43696)
Posted 24 Nov 2020 by wujj123456
Thanks for the code pointer and explanation. Is the VM's kernel log kept somewhere in the output directory? Since this only happens on Windows, it's probably an issue with VirtualBox. I'd like to verify at least whether the VM received the shutdown command.

Given that the wrapper is BOINC code, I guess there is nothing a project can do. I haven't tried other VM projects on Windows yet. Let me see if this can be reproduced with VMs from other projects too. If so, it's probably something that could be discussed on the BOINC GitHub.
36) Message boards : ATLAS application : VM did not power off when requested (Message 43694)
Posted 24 Nov 2020 by wujj123456
Yes, I realize the start and shutdown time is always like this, but I am curious what the VM is doing during those 5 minutes. I mean, if my computer always took more than 5 minutes to shut down and I had to pull the plug each time, I would probably want to figure out why.

Despite semi-related discussion in many threads, I am not able to find details on these specific 5 minutes. If the VM isn't doing anything useful, and given that we terminate it at the end anyway, I wonder if we could just shorten the timeout, clean up the VM sooner, and all get more work done.
37) Message boards : ATLAS application : VM did not power off when requested (Message 43692)
Posted 24 Nov 2020 by wujj123456
It seems that all my ATLAS tasks waste 5 minutes at the end waiting for the VM to power off. For example: https://lhcathome.cern.ch/lhcathome/result.php?resultid=289365725

2020-11-23 15:07:03 (16632): VM Completion File Detected.
2020-11-23 15:07:03 (16632): Powering off VM.
2020-11-23 15:12:05 (16632): VM did not power off when requested.
2020-11-23 15:12:05 (16632): VM was successfully terminated.
2020-11-23 15:12:05 (16632): Deregistering VM. (boinc_683247da3d5f3b86, slot#32)

I've checked a dozen results so far and they all have the same messages at the end. How do I debug why the VM isn't powering off when requested?
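One way to poke at this by hand is to issue the power-off yourself while a task sits in that 5-minute window and watch the state change (a command sketch using stock VBoxManage subcommands; the VM name is the one from the log above and will differ per slot):

```shell
$ VBoxManage list runningvms
$ VBoxManage showvminfo "boinc_683247da3d5f3b86" | grep -i "^State"
$ VBoxManage controlvm "boinc_683247da3d5f3b86" acpipowerbutton   # graceful ACPI signal
$ VBoxManage controlvm "boinc_683247da3d5f3b86" poweroff          # hard stop
```

If acpipowerbutton by hand also hangs, the problem is inside the guest rather than in the wrapper.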

Given that this doesn't seem to affect results at all, why can't we just terminate the VM directly?

PS: I am on Windows 10 64-bit, version 20H2 (OS build 19042.630). VirtualBox version 6.1.16 r140961, BOINC version 7.16.11.
38) Message boards : Sixtrack Application : please, remove non-optimized application SixTrack for 32 bit systems (Message 42642)
Posted 28 May 2020 by wujj123456
I would say just remove the non-optimized apps altogether... On my Ryzen 3, which is perfectly capable of AVX, I still get lots of non-optimized WUs, which take 50-100% longer to finish for the same credit. (I am using credit/hr as an approximation of efficiency since it's the same app. Feel free to correct that assumption if it's invalid.) I really doubt there are many systems incapable of SSE2 these days, and most should be able to do AVX too. It's also interesting that all apps are at least SSE2 on Linux, and apparently that's not a concern there.

I wonder if I could use app_info.xml to force-map the non-optimized app to the AVX application? Would it generate different results and fail validation? Has anyone tried that already?
39) Message boards : Theory Application : (Native) Theory - Sherpa looooooong runners (Message 41422)
Posted 29 Jan 2020 by wujj123456
Finally finished: https://lhcathome.cern.ch/lhcathome/result.php?resultid=259641514

===> [runRivet] Mon Jan 20 15:24:51 UTC 2020 [boinc pp jets 8000 800 - sherpa 1.4.1 default 100000 16]

Run time 4 days 12 hours 57 min 5 sec
CPU time 4 days 12 hours 21 min 56 sec

It actually finishes? I have a few of these 1d+ or 2d+ WUs as well, stuck at 100% progress. I felt they would never finish...
40) Message boards : ATLAS application : error on Atlas native: 195 (0x000000C3) EXIT_CHILD_FAILED (Message 41410)
Posted 28 Jan 2020 by wujj123456
The BOINC data directory must be mounted inside the container, and with a default installation this is /var/lib/boinc-client/slots. If there are problems mounting /var you could try a different data directory or install BOINC in a different place. For example on my desktop I run boinc-client from my home directory because the root partition is too small.

Thanks for the reply. Looks like it's a bind mount, and I should be able to easily reproduce this without wasting WUs. However, it does seem to work locally, assuming that seeing the error message means the container was set up properly with the remount.

$ sudo su -l boinc -s /bin/bash -c '/cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/current/bin/singularity exec --pwd /var/lib/boinc-client/slots/32 -B /cvmfs,/var /cvmfs/atlas.cern.ch/repo/containers/images/singularity/x86_64-centos7.img sh ls'
INFO: Convert SIF file to sandbox...
/usr/bin/ls: /usr/bin/ls: cannot execute binary file
INFO: Cleaning up image...

Now I wonder if it's some setting in the default unit file that came with Ubuntu 19.10: https://pastebin.com/akEe8cyY. I am not that familiar with systemd unit files, but nothing looks suspicious after searching the man page. Clearly the symlink /var/lib/boinc must resolve fine, given all WUs read/write /var/lib/boinc-client/ without a problem. Any ideas where I should look next?
