Questions and Answers : Preferences : Which LHC applications are less network-intensive than ATLAS?
Joined: 5 Feb 23 | Posts: 4 | Credit: 84,879 | RAC: 0
I've been running jobs for LHC@home since Feb 5 and, as far as I can tell, have exclusively received "ATLAS simulation" jobs. I've also noticed that my data usage has gone through the roof (more than 32 GB in ca. 12 days for all projects combined), and I suspect the ATLAS jobs have a good deal to do with that. (It seems that ATLAS won't even run without constant network access.)

Are there any guidelines available as to what network data usage one can expect from the various LHC applications? For now I've excluded both ATLAS applications, but it would be good to know if there are others that are heavy on data usage.
Joined: 15 Jun 08 | Posts: 2520 | Credit: 252,204,849 | RAC: 133,560
A rough estimate per active core running 24/7:
- 0.5 GB per day download
- 0.5 GB per day upload

This can be expected for a mix of Theory native, ATLAS native and CMS vbox, plus a local Squid instance. Higher download values should be expected if:
- a local Squid is not used
- Theory vbox is configured instead of native
- ATLAS vbox is configured instead of native

Permanent internet access is a must for Theory/ATLAS/CMS. SixTrack is the only LHC@home app that does not require permanent internet access, but it does not always have work available.

Since your computers are hidden, other volunteers can't check the details to give further advice. You may make them visible here:
https://lhcathome.cern.ch/lhcathome/prefs.php?subset=project
Joined: 28 Sep 04 | Posts: 722 | Credit: 48,387,588 | RAC: 29,408
All LHC subprojects that use VirtualBox need a constant network connection while running. Only SixTrack tasks don't need it; they don't use VirtualBox but are 'normal' BOINC tasks. Unfortunately they aren't available all the time (like right now). See the server status page, or this page for the history:
https://grafana.kiska.pw/d/boinc/boinc?orgId=1&var-project=lhc@home&from=now-2d&to=now

Another option to reduce network traffic is to install a local caching proxy (like Squid). Instructions are here:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473#42987
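The linked thread carries the maintained configuration; purely as an illustration of the idea, a minimal single-host Squid cache for BOINC traffic might look like the sketch below. All directive values here are assumptions, not the values from that thread.

```
# squid.conf sketch (assumed values) -- a small localhost-only cache.
# BOINC must then be pointed at proxy 127.0.0.1:3128 in its settings.
http_port 127.0.0.1:3128

# 8 GB disk cache; CVMFS/ATLAS objects can be large, so raise the
# default per-object size limit.
cache_dir ufs /var/spool/squid 8192 16 256
maximum_object_size 1024 MB
cache_mem 256 MB

# Only accept requests from this machine.
http_access allow localhost
http_access deny all
```

With such a setup, repeated CVMFS fetches are served from the local disk cache instead of hitting CERN's servers again.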
Joined: 5 Feb 23 | Posts: 4 | Credit: 84,879 | RAC: 0
> A rough estimate per active core running 24/7: ...

@computezrmle, thank you so much for your quick and insightful response. I've activated the "Should LHC@home show your computers on its website?" option, if you don't mind taking another look.

My sole computer is a Ryzen 5800X3D (8C/16T) and it runs at 25% (i.e. 4 CPUs) for about 18 hours/day (though I've trimmed that back a bit to 16 hrs/day due to the network data issue). My GPU is a Radeon 6750XT (with a clock limit of 50% of max to help with my electric bill), but so far it seems to be used only by other projects. And I'm NOT running a local Squid (or other proxy) instance.

I have enabled VirtualBox and didn't check the "Run native" flag, so all LHC jobs (as far as I could tell, only ATLAS simulations) came as vbox jobs for 4 CPUs. I actually like the idea of vbox jobs, since they provide a good separation between the job and the main desktop OS (Windows 10 Pro 21H2), but perhaps I should request native jobs to save network bandwidth...

I'm running several other projects in parallel, though some of them don't have jobs very often (e.g. Rosetta and climateprediction). Originally I ran LHC@home at the standard 100 resource share, but in light of the network data usage I lowered it to 50. I also disabled the ATLAS simulations for the time being.

If there are any more insights you (or anyone else) can provide based on this info and my computer stats, that would be great. Thanks so much in advance.
Joined: 15 Jun 08 | Posts: 2520 | Credit: 252,204,849 | RAC: 133,560
> ... it runs at 25% (i.e. 4 CPUs) for about 18 hours/day (though I've trimmed that back a bit to 16 hrs/day due to the network data issue)

Do you switch the computer off after 16/18 hrs, or did you set a limit in BOINC? ATLAS (and other VM tasks) may fail if the break is too long (several hours). That's not yet the case here, but you should be aware of it.

Is there a transfer limit set by your ISP? If not, it doesn't make much sense to set one on your side. Each ATLAS task downloads a setup file of 200-430 MB before the task starts and uploads a result file of 120-200 MB when the task has finished.

> I have enabled VirtualBox and didn't check the "Run native" flag

Native apps from LHC@home are Linux only. Since you run Windows, that flag must stay disabled; if you enabled it, you wouldn't get any ATLAS/Theory tasks. As for "nothing but ATLAS": check whether Theory (and maybe also CMS) are enabled on your prefs page, then just wait.

> ... standard 100 resource share, but ... bumped it down to priority 50 ...

Those are long-term values which only make sense if all projects continuously have work available. Given 3 projects A (100%), B (100%) and C (1%): as long as A and B have no work, you will get 100% C.
Joined: 5 Feb 23 | Posts: 4 | Credit: 84,879 | RAC: 0
My computer is running and online 24/7; I just capped the activity period in BOINC Manager. My ISP doesn't have a transfer limit, but they start charging extra once the data cap is exceeded, so I'd like to keep overall data usage under control.

> Each ATLAS task downloads a setup file of 200-430 MB before the task starts and uploads a result file of 120-200 MB when the task has finished.

So that's about 320 to 630 MB for every 4 hours of effective runtime. That's a lot of data.

> As for "nothing but ATLAS": check whether Theory (and maybe also CMS) are enabled on your prefs page, then just wait.

I originally had all applications enabled but only ever got ATLAS jobs. Now that I have ATLAS disabled, I actually got a Theory job. Are Theory or CMS more "data friendly" (i.e. do they use less data transfer per effective computation time)?

> ... standard 100 resource share, but ... bumped it down to priority 50 ...

I understand that, but it seems to be working for the moment: I'm getting work from 4 other projects and the amount of work that LHC is doing is going down. As an additional backstop, I've now set a cap of 4096 MB every 2 days. Under normal circumstances that cap shouldn't be reached; if it is, that indicates some issue.
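A back-of-the-envelope check of those numbers (per-task transfer sizes and ~4 h effective runtime are the figures from this thread; the activity window and the 4096 MB / 2 days cap are the poster's own settings) suggests ATLAS alone could blow through that cap:

```python
# Rough ATLAS data-budget sketch based on the figures in this thread.
DL_MB = (200, 430)          # setup download per task, MB
UL_MB = (120, 200)          # result upload per task, MB
TASK_HOURS = 4              # effective runtime per 4-core vbox task
ACTIVE_HOURS_PER_DAY = 16   # BOINC activity cap set by the poster

tasks_per_day = ACTIVE_HOURS_PER_DAY / TASK_HOURS
lo = tasks_per_day * (DL_MB[0] + UL_MB[0])   # best case, MB/day
hi = tasks_per_day * (DL_MB[1] + UL_MB[1])   # worst case, MB/day
cap_per_day = 4096 / 2                       # "4096 MB every 2 days"

print(f"ATLAS alone: {lo:.0f}-{hi:.0f} MB/day vs. cap {cap_per_day:.0f} MB/day")
```

Even the lower bound (1280 MB/day) is well over half the 2048 MB/day cap, and the upper bound (2520 MB/day) exceeds it, before counting the VM's direct HTTP traffic that bypasses BOINC.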
Joined: 15 Jun 08 | Posts: 2520 | Credit: 252,204,849 | RAC: 133,560
The most limiting factor is this:

> My ISP doesn't have a transfer limit, but they start charging extra once the data cap is exceeded, so I'd like to keep overall data usage under control.

As for ATLAS: Besides the big file transfers already mentioned (which go through the BOINC client), the VM does lots of HTTP transfers that bypass BOINC (CVMFS, Frontier). Suspending a task for a couple of hours or during sensitive phases may cause it to fail.
=> Running ATLAS is not recommended in this scenario.

As for CMS: It gets lots of CVMFS updates, sends lots of data directly from the VM to a CERN data bridge (~120 MB each) and exchanges status information every few minutes. Suspending a task for a couple of hours or during sensitive phases may cause it to fail.
=> Running CMS is not recommended in this scenario.

As for Theory: Like ATLAS/CMS it gets CVMFS updates via HTTP, bypassing the BOINC client, but it requires only small initial data and creates small result data. A local Squid can serve most of the CVMFS updates without external connections.

That said, SixTrack and Theory (plus Squid) would be the recommended subprojects. In that case it would also be safe to remove the 16-hrs cap.
Joined: 5 Feb 23 | Posts: 4 | Credit: 84,879 | RAC: 0
> That said, SixTrack and Theory (+Squid) would be the recommended subprojects.

Thanks so much for this information. I've limited my LHC@home applications to SixTrack and Theory per your recommendation. I have also installed the Squid proxy according to your instructions elsewhere on this forum (using your recommended configuration). If nothing else, my little script to evaluate Squid's access.log file gives me some valuable information about which BOINC projects are the big data-transfer hogs.

Thanks again for all the good work you're doing here.
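The poster's script isn't shown in the thread. As a hedged sketch of the same idea, assuming Squid's default "native" access.log format (where the 5th field is the response size in bytes and the 7th is the URL, or host:port for CONNECT tunnels), per-host traffic could be tallied like this:

```python
#!/usr/bin/env python3
"""Tally Squid access.log traffic per destination host.

Sketch only: assumes Squid's default "native" log format. Adjust the
field indices if a custom logformat directive is configured.
"""
import sys
from collections import defaultdict
from urllib.parse import urlsplit

def summarize(lines):
    """Return {host: total_bytes} aggregated from access.log lines."""
    totals = defaultdict(int)
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue                      # skip malformed lines
        try:
            size = int(fields[4])         # bytes sent to the client
        except ValueError:
            continue
        url = fields[6]
        if "://" in url:                  # plain HTTP request: full URL
            host = urlsplit(url).hostname or url
        else:                             # CONNECT tunnel: "host:port"
            host = url.rsplit(":", 1)[0]
        totals[host] += size
    return dict(totals)

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        totals = summarize(f)
    # Biggest data-transfer hogs first.
    for host, nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{nbytes / 1e6:10.1f} MB  {host}")
```

Run it as `python3 squid_hogs.py /var/log/squid/access.log` to get a descending per-host byte count; mapping hosts back to BOINC projects is then easy by eye.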
Joined: 15 Jun 08 | Posts: 2520 | Credit: 252,204,849 | RAC: 133,560
> Squid's access.log file gives me some valuable information about which BOINC projects are the big data-transfer hogs

Just in case you try CMS again: the intermediate results mentioned in the quote below will not go through the proxy [1], hence will not be visible in the access.log.

computezrmle wrote:
> As for CMS: ...

[1] It would require non-trivial modifications of the OS's network stack to force them through the proxy, just to be able to count them.
Joined: 19 Feb 08 | Posts: 708 | Credit: 4,336,250 | RAC: 0
I am getting SixTrack tasks on my Windows 11 laptop with an Intel i5-1235U CPU. They run in about 20 minutes.

Tullio00
©2024 CERN