Message boards : CMS Application : New Version 70.00
Joined: 20 Jun 14 · Posts: 380 · Credit: 238,712 · RAC: 0

This new version of the CMS app now supports the multiattach feature. This means that the image is no longer copied to the slot directory for each new job. Let us know if there are any issues.
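For context: VirtualBox provides this via its "multiattach" medium type, where the base image stays read-only and each VM only gets a small per-VM differencing image. A rough sketch of what this looks like at the VBoxManage level (the VM and controller names below are placeholders, and the exact calls vboxwrapper makes may differ):

    # mark the base image as multiattach; it stays read-only and can be
    # shared by several VMs at once
    VBoxManage modifymedium disk CMS_2022_09_07_prod.vdi --type multiattach

    # attach the shared image to a VM; VirtualBox creates a per-VM
    # differencing image instead of a full copy in the slot directory
    VBoxManage storageattach "boinc_vm_slot0" --storagectl "SATA" \
        --port 0 --device 0 --type hdd --medium CMS_2022_09_07_prod.vdi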
Joined: 20 Jun 14 · Posts: 380 · Credit: 238,712 · RAC: 0

Please try again.
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

The app version has changed to multicore. Is that intentional?
Joined: 20 Jun 14 · Posts: 380 · Credit: 238,712 · RAC: 0

No. Will fix this asap. v70.10 is on its way.
Joined: 15 Nov 14 · Posts: 602 · Credit: 24,371,321 · RAC: 0

I am wondering if this means that CMS is "operational", or still doing test units. I will try it if they are doing actual work units.
Joined: 4 Sep 22 · Posts: 91 · Credit: 16,008,656 · RAC: 17,897

Just what are we looking for here?
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

Each of my VMs starts with log entries like these:

    2022-11-01 12:50:43 (39334): Guest Log: [INFO] CMS application starting. Check log files.
    2022-11-01 12:50:43 (39334): Guest Log: [INFO] Requesting an idtoken from LHC@home
    2022-11-01 12:50:44 (39334): Guest Log: [INFO] Requesting an idtoken from vLHC@home-dev
    2022-11-01 12:51:15 (39334): Guest Log: [INFO] Requesting an idtoken from LHC@home
    2022-11-01 12:51:15 (39334): Guest Log: [INFO] Requesting an idtoken from vLHC@home-dev
    2022-11-01 12:51:45 (39334): Guest Log: [INFO] Requesting an idtoken from LHC@home
    2022-11-01 12:51:46 (39334): Guest Log: [INFO] Requesting an idtoken from vLHC@home-dev
    2022-11-01 12:52:17 (39334): Guest Log: [INFO] Requesting an idtoken from LHC@home
    2022-11-01 12:52:17 (39334): Guest Log: [INFO] Requesting an idtoken from vLHC@home-dev
    2022-11-01 12:52:48 (39334): Guest Log: [INFO] Requesting an idtoken from LHC@home
    2022-11-01 12:52:49 (39334): Guest Log: [INFO] Requesting an idtoken from vLHC@home-dev
    2022-11-01 12:53:20 (39334): Guest Log: [INFO] Requesting an idtoken from LHC@home
    2022-11-01 12:53:20 (39334): Guest Log: [INFO] Requesting an idtoken from vLHC@home-dev
    2022-11-01 12:53:52 (39334): Guest Log: [DEBUG]  % Total  % Received % Xferd  Average Speed  Time  Time  Time  Current
    2022-11-01 12:53:52 (39334): Guest Log:                                Dload  Upload  Total  Spent  Left  Speed
    2022-11-01 12:53:52 (39334): Guest Log:   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    2022-11-01 12:53:52 (39334): Guest Log: 100   221  100   221    0     0    436      0 --:--:-- --:--:-- --:--:--   437
    2022-11-01 12:53:52 (39334): Guest Log: [DEBUG]  % Total  % Received % Xferd  Average Speed  Time  Time  Time  Current
    2022-11-01 12:53:52 (39334): Guest Log:                                Dload  Upload  Total  Spent  Left  Speed
    2022-11-01 12:53:52 (39334): Guest Log:   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    2022-11-01 12:53:52 (39334): Guest Log: 100   221  100   221    0     0    436      0 --:--:-- --:--:-- --:--:--   437
    2022-11-01 12:53:52 (39334): Guest Log: [ERROR] Could not get an x509 credential

Nonetheless the CMS jobs seem to run fine.
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

Got a fresh v70.00 task after a couple of v70.10 tasks. This usually indicates that not all server instances have been restarted after the recent app upgrade.
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

CMS never runs test units, not even on the dev server. The scientific payload comes from the same backend queue.
Joined: 15 Nov 14 · Posts: 602 · Credit: 24,371,321 · RAC: 0

"CMS never runs test units, not even on the dev server."

OK, I had gotten the opposite impression:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5855&postid=47122#47122
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

You probably stumbled over this: "We've been running the same test-flow for a couple of years now ...". It means that Ivan has to watch the backend queue and manually create fresh work when it starts running dry. The BOINC related queue automatically generates "envelope" tasks.
Joined: 15 Nov 14 · Posts: 602 · Credit: 24,371,321 · RAC: 0

OK, I have been wanting to do it. I am in again.
Joined: 27 Sep 08 · Posts: 850 · Credit: 692,713,859 · RAC: 95,524

Same for me, I keep aborting the v70.00 tasks.
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

"Just what are we looking for here?"

Not sure what you are asking for:
- the scientific background?
- the task setup from the IT perspective?
- anything else?

Would be nice if you could post a hint.
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

CMS v70.00 and v70.10 are based on the same vdi file "CMS_2022_09_07_prod.vdi".

The kernel boot parameters of that vdi contain an old bug that was introduced long ago. It affects the CVMFS configuration when the VM starts to boot. The server lists passed to the kernel look as follows:

    cvmfs_server=cvmfs-stratum-one.cern.ch,cvmfs-s1fnal.opensciencegrid.org,cvmfs-s1bnl.opensciencegrid.org,grid-cvmfs-one.desy.de
    cvmfs_cdn=s1cern-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io;s1bnl-cvmfs.openhtc.io;s1fnal-cvmfs.openhtc.io;s1unl-cvmfs.openhtc.io

Later, within the /etc/cvmfs configuration files, a ";" (semicolon) must be used as the separator, with double quotes enclosing the list. Here, however, the semicolon is not allowed; a "," (comma) must be used instead. The "cvmfs_server" list looks fine, but not the "cvmfs_cdn" list.

As a result the "cvmfs_cdn" list configures only 2 servers (s1cern-cvmfs.openhtc.io and s1ral-cvmfs.openhtc.io). All others behind the 1st ";" will be ignored. This affects load-balancing as well as fail-over:

- Load-balancing now happens only between the backends at CERN and RAL. Example: a client located near Melbourne will get CVMFS data from the Cloudflare proxy nearby (most likely also Melbourne), but if this proxy has to refresh its cache it will send requests to Europe although it could get the data from Swinburne (see the updated lists below).
- Fail-over will never switch to the backends at BNL, FNAL and UNL if CERN and RAL are both down.

@Laurence
If you touch this setting you might also want to sync the server lists with the recent lists from the CVMFS master. The result should then look like:

    cvmfs_server=cvmfs-stratum-one.cern.ch:8000,cernvmfs.gridpp.rl.ac.uk:8000,cvmfs-s1bnl.opensciencegrid.org:8000,cvmfs-s1fnal.opensciencegrid.org:8000,cvmfsrep.grid.sinica.edu.tw:8000,cvmfs-stratum-one.ihep.ac.cn:8000,cvmfs-s1.hpc.swin.edu.au:8000
    cvmfs_cdn=s1cern-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io:8080,s1asgc-cvmfs.openhtc.io:8080,s1ihep-cvmfs.openhtc.io:8080,s1swinburne-cvmfs.openhtc.io:8080
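How exactly the truncation happens depends on the vdi's contextualisation scripts, which I have not inspected. But if the boot code evaluates the kernel parameters as unquoted shell assignments, the first ";" would terminate the assignment, which matches the observed behaviour. A minimal sketch of that hypothetical failure mode:

    #!/bin/sh
    # Hypothetical reconstruction, NOT the actual boot script:
    # an unquoted ";" ends the assignment during shell evaluation
    eval 'cvmfs_cdn=s1cern-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io;s1bnl-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io' 2>/dev/null

    echo "$cvmfs_cdn"
    # prints: s1cern-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io
    # everything behind the first ";" is parsed as a separate (failing)
    # command and never makes it into the server list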
Joined: 15 Jun 08 · Posts: 2541 · Credit: 254,608,838 · RAC: 56,545

The following files inside the vdi are also not up to date. They should be changed according to:
https://github.com/cvmfs-contrib/config-repo/tree/master/etc/cvmfs/domain.d

[/persistent]/etc/cvmfs/domain.d/cern.ch.conf

    # These are here so cvmfs will notice them if common.conf sets them
    CVMFS_USE_CDN="$CVMFS_USE_CDN"
    CVMFS_HTTP_PROXY="$CVMFS_HTTP_PROXY"
    CVMFS_FALLBACK_PROXY="$CVMFS_FALLBACK_PROXY"
    . ../common.conf
    if [ "$CVMFS_USE_CDN" = "yes" ]; then
        CVMFS_SERVER_URL="http://s1cern-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1swinburne-cvmfs.openhtc.io:8080/cvmfs/@fqrn@"
    else
        CVMFS_SERVER_URL="http://cvmfs-stratum-one.cern.ch:8000/cvmfs/@fqrn@;http://cernvmfs.gridpp.rl.ac.uk:8000/cvmfs/@fqrn@;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/@fqrn@;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/@fqrn@;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/@fqrn@;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/@fqrn@;http://cvmfs-s1.hpc.swin.edu.au:8000/cvmfs/@fqrn@"
    fi
    CVMFS_KEYS_DIR=$CVMFS_MOUNT_DIR/$CVMFS_CONFIG_REPOSITORY/etc/cvmfs/keys/cern.ch

[/persistent]/etc/cvmfs/domain.d/opensciencegrid.org.conf

    # These are here so cvmfs will notice them if common.conf sets them
    CVMFS_USE_CDN="$CVMFS_USE_CDN"
    CVMFS_HTTP_PROXY="$CVMFS_HTTP_PROXY"
    CVMFS_FALLBACK_PROXY="$CVMFS_FALLBACK_PROXY"
    . ../common.conf
    if [ "$CVMFS_USE_CDN" = "yes" ]; then
        CVMFS_SERVER_URL="http://s1ral-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1nikhef-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1bnl-cvmfs.openhtc.io/cvmfs/@fqrn@;http://s1fnal-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1ihep-cvmfs.openhtc.io:8080/cvmfs/@fqrn@;http://s1swinburne-cvmfs.openhtc.io:8080/cvmfs/@fqrn@"
    else
        CVMFS_SERVER_URL="http://cvmfs-egi.gridpp.rl.ac.uk:8000/cvmfs/@fqrn@;http://klei.nikhef.nl:8000/cvmfs/@fqrn@;http://cvmfs-s1bnl.opensciencegrid.org:8000/cvmfs/@fqrn@;http://cvmfs-s1fnal.opensciencegrid.org:8000/cvmfs/@fqrn@;http://cvmfsrep.grid.sinica.edu.tw:8000/cvmfs/@fqrn@;http://cvmfs-stratum-one.ihep.ac.cn:8000/cvmfs/@fqrn@;http://cvmfs-s1.hpc.swin.edu.au:8000/cvmfs/@fqrn@"
    fi
    CVMFS_KEYS_DIR=$CVMFS_MOUNT_DIR/$CVMFS_CONFIG_REPOSITORY/etc/cvmfs/keys/opensciencegrid.org
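To verify inside a running VM which stratum-1 list is actually in effect after such a change, the standard CVMFS client tools can be used (this assumes console access to the VM and that the cms.cern.ch repository is mounted):

    # show the effective configuration, including CVMFS_SERVER_URL
    cvmfs_config showconfig cms.cern.ch | grep SERVER_URL

    # show which stratum-1 the client currently talks to, and re-probe
    # the round-trip times of all configured servers
    sudo cvmfs_talk -i cms.cern.ch host info
    sudo cvmfs_talk -i cms.cern.ch host probe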
Joined: 4 Sep 22 · Posts: 91 · Credit: 16,008,656 · RAC: 17,897

"Just what are we looking for here?"

What files on my system are new/changed? Are any old ones left behind that can be deleted?