Message boards : Number crunching : Setting up a local squid cache for a home cluster - old comments
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 39101 - Posted: 11 Jun 2019, 6:25:33 UTC

PurpleHat wrote:
On each computer, go to BOINC Manager at Options -> Other options -> HTTP Proxy and enter the IP and port of the host that is running squid.
If squid is not running, BOINC Manager will tell you that it needs Internet access. If you run more projects than just LHC, you can exclude them in the same section: simply add the project URL under [Don't use proxy for].

An easy-to-use method and a fallback if the method below doesn't work.
Be aware that:
- for some projects it's not only the project URL that has to be listed
- requests to the listed URLs don't appear in your logfile for further analysis



If you want to keep all requests in your squid logfile but a project has problems using data from the cache, you may leave [Don't use proxy for] blank and configure this method in your squid.conf instead:
# worldcommunitygrid doesn't like data from the local cache
# use the following lines as template if other projects also have problems
acl wcg_nocache dstdomain .worldcommunitygrid.org
always_direct allow wcg_nocache
cache deny wcg_nocache

# project foobar also doesn't like data from the local cache
# acl definitions are just examples 
acl foobar_nocache dstdomain server1.foobar.com
acl foobar_nocache dstdomain .download.foobar.com
always_direct allow foobar_nocache
cache deny foobar_nocache
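
After editing squid.conf like this, the changes can be checked and activated without a full restart. A minimal sketch, assuming squid is on the PATH and run with sufficient privileges:

# verify the syntax of the edited squid.conf
squid -k parse
# tell the running squid to re-read its configuration
squid -k reconfigure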
ID: 39101
Darrell

Joined: 8 Jul 08
Posts: 20
Credit: 25,933,648
RAC: 17,901
Message 39104 - Posted: 12 Jun 2019, 11:39:35 UTC - in response to Message 39101.  

@computezrmle:
PurpleHat wrote:

On each computer, go to BOINC Manager at Options -> Other options -> HTTP Proxy and enter the IP and port of the host that is running squid.
If squid is not running, BOINC Manager will tell you that it needs Internet access. If you run more projects than just LHC, you can exclude them in the same section: simply add the project URL under [Don't use proxy for].


True.

If you are using Windows and a client configuration file (cc_config.xml), adding the following may be easier when multiple machines share the same configuration:

<cc_config>
  <options>
    <proxy_info>
      <http_server_name>[your server here]</http_server_name>
      <http_server_port>[your server port number here]</http_server_port>
      <no_proxy>[comma-separated list of URLs that should not use the proxy]</no_proxy>
      <no_autodetect>0</no_autodetect>
    </proxy_info>
  </options>
</cc_config>
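
The file belongs in the BOINC data directory (C:\ProgramData\BOINC on a default Windows install). A minimal sketch for re-reading it without restarting the client, assuming the boinccmd tool that ships with BOINC:

boinccmd --read_cc_config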
ID: 39104
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 39121 - Posted: 13 Jun 2019, 16:55:21 UTC

Thanks to everybody who took the time to set this up, especially computezrmle for the example config file and groundwork, and also PurpleHat, Darrell and others for hints and suggestions. It's running nicely here on Lubuntu, serving native ATLAS and Theory to 3 BOINC clients and Firefox.
ID: 39121
Laurence
Project administrator
Project developer

Joined: 20 Jun 14
Posts: 372
Credit: 238,712
RAC: 0
Message 41520 - Posted: 11 Feb 2020, 9:36:46 UTC - in response to Message 39121.  

Has anyone set up a squid proxy on Windows on the same machine that runs the BOINC client?
ID: 41520
maeax

Joined: 2 May 07
Posts: 2071
Credit: 156,083,347
RAC: 105,899
Message 41536 - Posted: 12 Feb 2020, 3:35:17 UTC - in response to Message 41520.  

In BOINC under Windows, the HTTP proxy server is set to
http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch using port 80.
Is this a possibility for running ATLAS with a proxy server?
I made a test: ATLAS started and RDP started, but the task is not running and uses no CPU.
ID: 41536
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 41539 - Posted: 12 Feb 2020, 7:22:42 UTC - in response to Message 41536.  

In BOINC under Windows, the HTTP proxy server is set to
http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch using port 80.
Is this a possibility for running ATLAS with a proxy server?
I made a test: ATLAS started and RDP started, but the task is not running and uses no CPU.

Don't use your BOINC client's proxy configuration form unless you have your own proxy running inside your LAN!

The way you used it is wrong for several reasons:
1. The form expects a proxy name (or IP) instead of a complete URL.
2. Cloudflare proxies (s1x-cvmfs.openhtc.io) only serve data from distinct CERN Stratum-One servers.
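
For reference, a minimal sketch of what the form expects, with IP and port as examples for a squid running inside your LAN:

HTTP proxy server: 192.168.1.236
Port: 3128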
ID: 41539
Sesson

Joined: 4 Apr 19
Posts: 31
Credit: 3,541,125
RAC: 14,091
Message 42184 - Posted: 15 Apr 2020, 16:13:12 UTC

I set up squid on Windows on the same machine running BOINC just yesterday and it is working. Here are the steps:

    1. Download Squid from https://squid.diladele.com/ and install it. I'm running Rosetta@home and Einstein@home as well as Test4Theory, and the squid cache is about 1 GB in size, so please install it on a drive with sufficient disk space. Let's say it is installed at C:/Squid/ for the sake of simplicity.
    2. Right-click on the squid tray icon to stop the squid service and open squid.conf. Copy the example squid.conf into your squid.conf.
    3. Search for the http_port setting and change it to
    http_port <LAN IP Address>:3128
    . Here you cannot use "localhost" or "127.0.0.1"; you must use an IP address that is accessible from within the VM. You may configure your LAN router to assign your computer a fixed LAN IP such as 192.168.2.1 and use that. An IP that is in a VirtualBox LAN might work.
    4. In a command prompt under C:/Squid/, run
    bin/squid -z
    to create the cache subdirectories. I created the
    C:\Squid\var\cache\squid\0
    directory manually beforehand, but that might not be necessary.
    5. Now configure BOINC to use an HTTP proxy at <LAN IP Address>:3128 (see the verification sketch after these steps). You can now start downloading LHC@home tasks.
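
As a quick verification sketch (IP, port, and URL are examples; curl must be available), a request sent through the proxy should return an HTTP status line and show up in access_squid.log:

curl -I -x http://192.168.1.236:3128 http://s1cern-cvmfs.openhtc.io/cvmfs/atlas.cern.ch/.cvmfspublished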


I still have to run more tasks in order to fill up my cache, and although a Squid proxy is set up and serving CVMFS, the Test4Theory VM still connects directly to the Internet to download a huge amount of data, so you can see the CPU time relative to wallclock time on my computer is incredibly low. I have a slow Internet connection, but this is still much better than just running SixTrack for LHC@home.

ID: 42184
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 42186 - Posted: 15 Apr 2020, 18:09:50 UTC - in response to Message 42184.  

Congrats.

... the squid cache is about 1 GB in size

This is the size at installation time, right?
What settings do you use now (RAM, disk)?

...VM still connects directly to the Internet to download a huge amount of data.

Your logfiles show that the VMs are aware of the proxy, so there should be just a few (if any) HTTP requests going direct.
ID: 42186
Sesson

Joined: 4 Apr 19
Posts: 31
Credit: 3,541,125
RAC: 14,091
Message 42190 - Posted: 16 Apr 2020, 6:03:03 UTC - in response to Message 42186.  

I am using the following settings from the example squid.conf. I decreased the disk cache min-size to keep more data in squid. The squid installation itself is not very big, but it is now 1.21 GB including everything cached inside.
# You don't believe this is enough?
# For sure, it is!
cache_mem 192 MB
maximum_object_size_in_memory 24 KB
memory_replacement_policy heap GDSF


# Keep it large enough to store vdi files in the cache.
# See extra section 1 how to avoid onetimers eating up your storage.
# min-size=xxx keeps very small files away from your disk
cache_replacement_policy heap LFUDA
maximum_object_size 6144 MB
cache_dir aufs /var/cache/squid/0 32000 16 64 min-size=200

Squid was mostly working fine, but some tasks still don't like it. Here is an example from access_squid.log when an uncached task started:
192.168.1.236 3128 - - [16/Apr/2020:13:21:32 +0800] "POST http://lhcathome-upload.cern.ch/lhcathome_cgi/file_upload_handler HTTP/1.1" 200 960 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:21:36 +0800] "POST http://lhcathome-upload.cern.ch/lhcathome_cgi/file_upload_handler HTTP/1.1" 200 13158 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:21:57 +0800] "GET http://lhcathome-upload.cern.ch/lhcathome/download//ca/2378-1019287-3.run HTTP/1.1" 200 142072 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:22:17 +0800] "CONNECT lhcathome.cern.ch:443 HTTP/1.1" 200 67143 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_TUNNEL:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:22:31 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1439 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:23:28 +0800] "GET http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/cernvm-prod.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 200 1122 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:23:28 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1297 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:23:31 +0800] "GET http://s1bnl-cvmfs.openhtc.io/cvmfs/alice.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 200 1154 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:23:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1392 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:23:34 +0800] "GET http://s1ral-cvmfs.openhtc.io/cvmfs/grid.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 0 421 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS_ABORTED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:23:34 +0800] "GET http://s1cern-cvmfs.openhtc.io/cvmfs/sft.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 0 422 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS_ABORTED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:24:00 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1378 "-" "curl/7.29.0" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:24:04 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1325 "-" "curl/7.29.0" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:24:51 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1387 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:24:52 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1236 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:24:53 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1439 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:24:53 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1428 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:24:58 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1231 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:24:59 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1236 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:24:59 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1280 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:25:03 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1292 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:26:33 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1439 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:13:27:28 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1292 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:13:27:33 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1387 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT

After that, access_squid.log didn't log any activity until an Einstein@home GPU task finished; meanwhile the network was busy, and netstat showed VBoxHeadless.exe connected to Cloudflare, not squid.exe.
ID: 42190
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 42191 - Posted: 16 Apr 2020, 6:54:45 UTC - in response to Message 42190.  

Since the original squid.conf was written for a different squid version running on Linux, I suggest checking your configuration using:
squid -k parse

Then check the output for error messages.


cache_dir aufs /var/cache/squid/0 32000 16 64 min-size=7937
cache_dir aufs /var/cache/squid/0 32000 16 64 min-size=200

32000 creates a disk cache that can grow up to 32 GB.
min-size defines the balance between data that goes to RAM only and data that goes to both RAM and disk.
It makes sense to keep an eye on the disk's cluster size: 512 bytes on older disks, 4 kB on newer disks.
The value 7937 is based on the fact that most of the small, frequently used files fit into squid's RAM cache.
Most larger files are larger than 8 kB and would therefore use more than 2 clusters on a modern disk.
If squid is only used for BOINC, min-size=200 would just cause unnecessary disk writes.
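
On Windows, the cluster size can be checked with fsutil as a sketch, assuming an NTFS volume and an elevated command prompt (the drive letter is an example); look for the "Bytes Per Cluster" line:

fsutil fsinfo ntfsinfo C: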

After that access_squid.log doesn't log any activity

There's a file "init_data.xml" in each of your /slots/.../shared directories that transfers the job parameters to the VM.
Could you post the proxy-related section from one of those files?
ID: 42191
Sesson

Joined: 4 Apr 19
Posts: 31
Credit: 3,541,125
RAC: 14,091
Message 42192 - Posted: 16 Apr 2020, 8:23:43 UTC - in response to Message 42191.  

Another uncached task, Theory_2378-1028449-3_0: elapsed time 48:56, CPU time 2:07, currently still downloading (https://lhcathome.cern.ch/lhcathome/result.php?resultid=271634163)
init_data.xml:
<app_init_data>
<major_version>7</major_version>
<minor_version>16</minor_version>
<release>5</release>
<app_version>30005</app_version>
<userid>583569</userid>
<teamid>0</teamid>
<hostid>10591872</hostid>
<app_name>Theory</app_name>
<project_preferences>

<apps_selected>
<app_id>1</app_id>
<app_id>13</app_id>
</apps_selected>
<max_jobs>4</max_jobs>
<max_cpus>2</max_cpus>
</project_preferences>
<user_name>Sesson</user_name>
<project_dir>C:\ProgramData\BOINC/projects/lhcathome.cern.ch_lhcathome</project_dir>
<boinc_dir>C:\ProgramData\BOINC</boinc_dir>
<authenticator>blahblah</authenticator>
<wu_name>Theory_2378-1028449-3</wu_name>
<result_name>Theory_2378-1028449-3_0</result_name>
<comm_obj_name>boinc_0</comm_obj_name>
<slot>1</slot>
<client_pid>4984</client_pid>
<wu_cpu_time>0.000000</wu_cpu_time>
<starting_elapsed_time>0.000000</starting_elapsed_time>
<using_sandbox>0</using_sandbox>
<vm_extensions_disabled>0</vm_extensions_disabled>
<user_total_credit>678733.580860</user_total_credit>
<user_expavg_credit>751.647658</user_expavg_credit>
<host_total_credit>311157.952305</host_total_credit>
<host_expavg_credit>750.732287</host_expavg_credit>
<resource_share_fraction>1.000000</resource_share_fraction>
<checkpoint_period>60.000000</checkpoint_period>
<fraction_done_start>0.000000</fraction_done_start>
<fraction_done_end>1.000000</fraction_done_end>
<gpu_type></gpu_type>
<gpu_device_num>-1</gpu_device_num>
<gpu_opencl_dev_index>-1</gpu_opencl_dev_index>
<gpu_usage>0.000000</gpu_usage>
<ncpus>1.000000</ncpus>
<rsc_fpops_est>3600000000000.000000</rsc_fpops_est>
<rsc_fpops_bound>6000000000000000000.000000</rsc_fpops_bound>
<rsc_memory_bound>700000000.000000</rsc_memory_bound>
<rsc_disk_bound>8000000000.000000</rsc_disk_bound>
<computation_deadline>1587799861.000000</computation_deadline>
<vbox_window>0</vbox_window>
<host_info>blahblah
</host_info>blahblah
<proxy_info>
    <use_http_proxy/>
    <socks_server_name>127.0.0.1</socks_server_name>
    <socks_server_port>9150</socks_server_port>
    <http_server_name>192.168.1.236</http_server_name>
    <http_server_port>3128</http_server_port>
    <socks5_user_name></socks5_user_name>
    <socks5_user_passwd></socks5_user_passwd>
    <socks5_remote_dns>0</socks5_remote_dns>
    <http_user_name></http_user_name>
    <http_user_passwd></http_user_passwd>
    <no_proxy></no_proxy>
    <no_autodetect>0</no_autodetect>
</proxy_info>
<global_preferences>
   <source_project>http://einstein.phys.uwm.edu/</source_project>
   <mod_time>1548296957.000000</mod_time>
   <battery_charge_min_pct>90.000000</battery_charge_min_pct>
   <battery_max_temperature>40.000000</battery_max_temperature>
   <run_on_batteries>0</run_on_batteries>
   <run_if_user_active>1</run_if_user_active>
   <run_gpu_if_user_active>1</run_gpu_if_user_active>
   <suspend_if_no_recent_input>0.000000</suspend_if_no_recent_input>
   <suspend_cpu_usage>0.000000</suspend_cpu_usage>
   <start_hour>0.000000</start_hour>
   <end_hour>0.000000</end_hour>
   <net_start_hour>0.000000</net_start_hour>
   <net_end_hour>0.000000</net_end_hour>
   <leave_apps_in_memory>1</leave_apps_in_memory>
   <confirm_before_connecting>1</confirm_before_connecting>
   <hangup_if_dialed>0</hangup_if_dialed>
   <dont_verify_images>0</dont_verify_images>
   <work_buf_min_days>1.000000</work_buf_min_days>
   <work_buf_additional_days>1.000000</work_buf_additional_days>
   <max_ncpus_pct>100.000000</max_ncpus_pct>
   <cpu_scheduling_period_minutes>60.000000</cpu_scheduling_period_minutes>
   <disk_interval>60.000000</disk_interval>
   <disk_max_used_gb>20.000000</disk_max_used_gb>
   <disk_max_used_pct>100.000000</disk_max_used_pct>
   <disk_min_free_gb>0.000000</disk_min_free_gb>
   <vm_max_used_pct>2.000000</vm_max_used_pct>
   <ram_max_used_busy_pct>70.000000</ram_max_used_busy_pct>
   <ram_max_used_idle_pct>95.000000</ram_max_used_idle_pct>
   <idle_time_to_run>3.000000</idle_time_to_run>
   <max_bytes_sec_up>0.000000</max_bytes_sec_up>
   <max_bytes_sec_down>0.000000</max_bytes_sec_down>
   <cpu_usage_limit>100.000000</cpu_usage_limit>
   <daily_xfer_limit_mb>0.000000</daily_xfer_limit_mb>
   <daily_xfer_period_days>0</daily_xfer_period_days>
   <override_file_present>1</override_file_present>
   <network_wifi_only>1</network_wifi_only>
</global_preferences>
<app_file>vboxwrapper_26198ab7_windows_x86_64.exe</app_file>
<app_file>Theory_2019_11_13a.xml</app_file>
<app_file>Theory_2020_01_15.vdi</app_file>
</app_init_data>


access_squid.log:
192.168.1.236 3128 - - [16/Apr/2020:15:31:06 +0800] "GET http://lhcathome-upload.cern.ch/lhcathome/download//18a/2378-1028449-3.run HTTP/1.1" 200 142055 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:31:28 +0800] "CONNECT lhcathome.cern.ch:443 HTTP/1.1" 200 66859 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_TUNNEL:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:30 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1444 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1428 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1288 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:32:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1280 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:32:33 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1382 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:33 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1227 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:32:52 +0800] "GET http://s1asgc-cvmfs.openhtc.io:8080/cvmfs/alice.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 200 1107 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:53 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1397 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:58 +0800] "GET http://s1cern-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 200 1089 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:58 +0800] "GET http://s1bnl-cvmfs.openhtc.io/cvmfs/sft.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 200 1162 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:32:58 +0800] "GET http://s1cern-cvmfs.openhtc.io/cvmfs/grid.cern.ch/api/v1.0/geo/192.168.1.236/s1asgc-cvmfs.openhtc.io,s1bnl-cvmfs.openhtc.io,s1cern-cvmfs.openhtc.io,s1fnal-cvmfs.openhtc.io,s1ihep-cvmfs.openhtc.io,s1ral-cvmfs.openhtc.io,s1unl-cvmfs.openhtc.io HTTP/1.1" 200 1075 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MISS:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:33:17 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1321 "-" "curl/7.29.0" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:36:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1444 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:36:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1288 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_MEM_HIT:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:36:33 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1428 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:36:33 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1273 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:36:35 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1382 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:36:35 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1227 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:36:54 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1392 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:40:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1444 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:40:32 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 200 1289 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:40:34 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1428 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:40:34 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1273 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:40:36 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1382 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:40:36 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/sft.cern.ch/.cvmfspublished HTTP/1.1" 200 1227 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:40:54 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/alice.cern.ch/.cvmfspublished HTTP/1.1" 200 1392 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:42:44 +0800] "CONNECT lhcathome.cern.ch:443 HTTP/1.1" 200 30633 "-" "BOINC client (windows_x86_64 7.16.5)" TCP_TUNNEL:HIER_DIRECT
192.168.1.236 3128 - - [16/Apr/2020:15:44:38 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 0 254 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TAG_NONE_ABORTED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:44:38 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/cernvm-prod.cern.ch/.cvmfspublished HTTP/1.1" 0 254 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TAG_NONE_ABORTED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:44:39 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1278 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_NONE
192.168.1.236 3128 - - [16/Apr/2020:15:44:39 +0800] "GET http://s1ihep-cvmfs.openhtc.io/cvmfs/grid.cern.ch/.cvmfspublished HTTP/1.1" 200 1433 "-" "cvmfs Fuse 2.5.2 11ac1fe9-e05a-44b5-b29e-ca3d2370db11" TCP_REFRESH_MODIFIED:HIER_DIRECT

(the rest are all *.cvmfspublished)

squid -k parse
2020/04/16 16:13:17| Startup: Initializing Authentication Schemes ...
2020/04/16 16:13:17| Startup: Initialized Authentication Scheme 'basic'
2020/04/16 16:13:17| Startup: Initialized Authentication Scheme 'digest'
2020/04/16 16:13:17| Startup: Initialized Authentication Scheme 'negotiate'
2020/04/16 16:13:17| Startup: Initialized Authentication Scheme 'ntlm'
2020/04/16 16:13:17| Startup: Initialized Authentication.
2020/04/16 16:13:17| Processing Configuration File: /etc/squid/squid.conf (depth 0)
2020/04/16 16:13:17| Processing: acl localnet src 10.0.0.0/8    # RFC1918 possible internal network
2020/04/16 16:13:17| Processing: acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
2020/04/16 16:13:17| Processing: acl localnet src 192.168.0.0/16        # RFC1918 possible internal network
2020/04/16 16:13:17| Processing: acl localnet src fc00::/7       # RFC 4193 local private network range
2020/04/16 16:13:17| Processing: acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
2020/04/16 16:13:17| Processing: acl SSL_ports port 443
2020/04/16 16:13:17| Processing: acl Safe_ports port 80         # http
2020/04/16 16:13:17| Processing: acl Safe_ports port 21         # ftp
2020/04/16 16:13:17| Processing: acl Safe_ports port 443        # https
2020/04/16 16:13:17| Processing: acl Safe_ports port 70         # gopher
2020/04/16 16:13:17| Processing: acl Safe_ports port 210        # wais
2020/04/16 16:13:17| Processing: acl Safe_ports port 1025-65535 # unregistered ports
2020/04/16 16:13:17| Processing: acl Safe_ports port 280        # http-mgmt
2020/04/16 16:13:17| Processing: acl Safe_ports port 488        # gss-http
2020/04/16 16:13:17| Processing: acl Safe_ports port 591        # filemaker
2020/04/16 16:13:17| Processing: acl Safe_ports port 777        # multiling http
2020/04/16 16:13:17| Processing: acl CONNECT method CONNECT
2020/04/16 16:13:17| Processing: http_access allow localhost manager
2020/04/16 16:13:17| Processing: http_access deny manager
2020/04/16 16:13:17| Processing: http_access deny !Safe_ports
2020/04/16 16:13:17| Processing: http_access deny CONNECT !SSL_ports
2020/04/16 16:13:17| Processing: http_access deny to_localhost
2020/04/16 16:13:17| Processing: http_access allow localnet
2020/04/16 16:13:17| Processing: http_access allow localhost
2020/04/16 16:13:17| Processing: http_access deny all
2020/04/16 16:13:17| Processing: http_port 192.168.1.236:3128
2020/04/16 16:13:17| Processing: coredump_dir /var/cache/squid
2020/04/16 16:13:17| Processing: refresh_pattern ^ftp:          1440    20%     10080
2020/04/16 16:13:17| Processing: refresh_pattern ^gopher:       1440    0%      1440
2020/04/16 16:13:17| Processing: refresh_pattern -i (/cgi-bin/|\?) 0    0%      0
2020/04/16 16:13:17| Processing: refresh_pattern .              0       20%     4320
2020/04/16 16:13:17| Processing: dns_nameservers 8.8.8.8 208.67.222.222
2020/04/16 16:13:17| Processing: max_filedescriptors 3200
2020/04/16 16:13:17| Processing: acl wcg_nocache dstdomain .worldcommunitygrid.org
2020/04/16 16:13:17| Processing: always_direct allow wcg_nocache
2020/04/16 16:13:17| Processing: cache deny wcg_nocache
2020/04/16 16:13:17| Processing: acl einstein_nocache dstdomain einstein2.aei.uni-hannover.de
2020/04/16 16:13:17| Processing: always_direct allow einstein_nocache
2020/04/16 16:13:17| Processing: cache deny einstein_nocache
2020/04/16 16:13:17| Processing: acl cvmfs_geoapi urlpath_regex -i ^/+cvmfs/+[0-9a-z._~-]+/+api/+[0-9a-z._~-]+/+geo/+[0-9a-z._~-]+/+[0-9a-z.,_~-]+
2020/04/16 16:13:17| Processing: always_direct allow cvmfs_geoapi
2020/04/16 16:13:17| Processing: cache deny cvmfs_geoapi
2020/04/16 16:13:17| Processing: acl boinc_nocache urlpath_regex -i /download[0-9a-z._~-]*/+[0-9a-z._~-]+/+.+
2020/04/16 16:13:17| Processing: cache deny boinc_nocache
2020/04/16 16:13:17| Processing: acl PragmaNoCache req_header Pragma no-cache
2020/04/16 16:13:17| Processing: cache deny PragmaNoCache
2020/04/16 16:13:17| Processing: client_dst_passthru off
2020/04/16 16:13:17| Processing: cache_mem 192 MB
2020/04/16 16:13:17| Processing: maximum_object_size_in_memory 24 KB
2020/04/16 16:13:17| Processing: memory_replacement_policy heap GDSF
2020/04/16 16:13:17| Processing: cache_replacement_policy heap LFUDA
2020/04/16 16:13:17| Processing: maximum_object_size 6144 MB
2020/04/16 16:13:17| Processing: cache_dir aufs /var/cache/squid/0 32000 16 64 min-size=200
2020/04/16 16:13:17| Processing: logformat my_awstats %>A %lp %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
2020/04/16 16:13:17| Processing: access_log stdio:/var/log/squid/access_squid.log logformat=my_awstats
2020/04/16 16:13:17| Processing: strip_query_terms off
2020/04/16 16:13:17| Processing: netdb_filename none
2020/04/16 16:13:17| ERROR: 'netdb_filename' requires --enable-icmp
2020/04/16 16:13:17| Processing: coredump_dir none
2020/04/16 16:13:17| Processing: ftp_user anonymous@
2020/04/16 16:13:17| Processing: max_stale 37 days
2020/04/16 16:13:17| Processing: refresh_pattern .      0       0%      0
2020/04/16 16:13:17| Processing: store_avg_object_size 1 MB
2020/04/16 16:13:17| Processing: collapsed_forwarding on
2020/04/16 16:13:17| Processing: client_persistent_connections on
2020/04/16 16:13:17| Processing: server_persistent_connections on
2020/04/16 16:13:17| Processing: digest_generation off
2020/04/16 16:13:17| ERROR: 'digest_generation' requires --enable-cache-digests
2020/04/16 16:13:17| Processing: log_icp_queries off
2020/04/16 16:13:17| Processing: error_default_language en
2020/04/16 16:13:17| Processing: dns_defnames on
2020/04/16 16:13:17| Processing: dns_v4_first on
2020/04/16 16:13:17| Processing: forwarded_for transparent
2020/04/16 16:13:17| Initializing https proxy context


My user contribution: http://mcplots-dev.cern.ch/production.php?view=user&system=3&userid=583569
ID: 42192
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 42193 - Posted: 16 Apr 2020, 9:15:50 UTC - in response to Message 42192.  

I don't see an obvious error in your post except that your squid logfile should list more requests.

This is an example from one of your successful tasks:
2020-04-16 15:08:16 (2252): Guest Log: [INFO] Detected local proxy http://192.168.1.236:3128 in init_data.xml
2020-04-16 15:08:16 (2252): Guest Log: [INFO] Testing connection to 192.168.1.236 on port 3128
2020-04-16 15:08:19 (2252): Guest Log: [INFO] Ncat: Version 7.50 ( https://nmap.org/ncat )
2020-04-16 15:08:19 (2252): Guest Log: Ncat: Connected to 192.168.1.236:3128.
2020-04-16 15:08:19 (2252): Guest Log: Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
2020-04-16 15:08:19 (2252): Guest Log: [INFO] 0

The VM gets the squid's IP and port and runs a basic connection test, which succeeds.
CVMFS inside the VM should now be reconfigured to use the proxy, but unfortunately there is no log entry telling us whether this also succeeds.

I'll ask Laurence to add some more logging.
Since the overall walltime/CPU-time values of your successful tasks don't look too bad, you may keep Theory running and check the task details.
ID: 42193
Sesson

Joined: 4 Apr 19
Posts: 31
Credit: 3,541,125
RAC: 14,091
Message 42207 - Posted: 17 Apr 2020, 16:57:49 UTC

While one of the reasons for my participation in LHC@home is its lower CPU utilization, which keeps power consumption under control, I'm tired of waiting and watching these uncached tasks. I had better set up Linux in VirtualBox and run native tasks there.
ID: 42207
CloverField

Joined: 17 Oct 06
Posts: 74
Credit: 51,492,587
RAC: 22,350
Message 42495 - Posted: 15 May 2020, 14:09:32 UTC
Last modified: 15 May 2020, 14:09:45 UTC

Well, this works surprisingly well: I set the cache up at 2 am and it has already cached 1.3 GB of data.
@computezrmle, would you mind telling me how you got this output from squid?

Downloads served by the proxy
TCP_MEM_HIT 1,017,914 requests 1.92 GB
TCP_HIT 1,516 requests 4.68 GB
TCP_REFRESH_UNMODIFIED 8,505 requests 63.26 MB

Downloads requested from lhc@home
TCP_MISS 1,363 requests 362.30 MB
TCP_REFRESH_MODIFIED 3,031 requests 11.54 MB

Result uploads to lhc@home
TCP_MISS__UPLOAD 2,037 requests 28.97 GB


I'd like to see how much data I've saved by having the cache running.
ID: 42495
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 42498 - Posted: 15 May 2020, 14:30:57 UTC - in response to Message 42495.  

The raw data can be found in squid's access.log, but it requires a couple of scripts and filters to get values like TCP_MISS__UPLOAD.
I run the refined(!) data through a customized awstats.
This results in pages like this (with not that many hits, of course ;-)):
http://wlcg-squid-monitor.cern.ch/awstats/bin/awstats.pl?month=05&year=2020&output=main&config=atlasfrontier.cern.ch&framename=index
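
As a rough sketch of such a filter (not the actual scripts; it assumes the my_awstats logformat shown earlier in this thread, where the byte count is field 11 and the squid result code like TCP_MISS:HIER_DIRECT is the last field), awk can sum requests and volume per result code:

awk '{ split($NF, s, ":"); n[s[1]]++; b[s[1]] += $11 }
     END { for (c in n) printf "%-24s %10d requests %10.2f MB\n", c, n[c], b[c]/1048576 }' /var/log/squid/access_squid.log

Separating uploads (e.g. TCP_MISS__UPLOAD) would need an additional filter on the request method (POST) and the upload URL.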
ID: 42498
CloverField

Joined: 17 Oct 06
Posts: 74
Credit: 51,492,587
RAC: 22,350
Message 42500 - Posted: 15 May 2020, 15:12:33 UTC - in response to Message 42498.  

The raw data can be found in squid's access.log, but it requires a couple of scripts and filters to get values like TCP_MISS__UPLOAD.
I run the refined(!) data through a customized awstats.
This results in pages like this (with not that many hits, of course ;-)):
http://wlcg-squid-monitor.cern.ch/awstats/bin/awstats.pl?month=05&year=2020&output=main&config=atlasfrontier.cern.ch&framename=index


Looks like I have a fun weekend project then.
Thanks for the info!
ID: 42500
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2386
Credit: 222,871,475
RAC: 137,237
Message 42989 - Posted: 9 Jul 2020, 14:26:57 UTC

New comments regarding a Squid configuration should be posted here:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
ID: 42989