Message boards : ATLAS application : WorkDirSize
Joined: 18 Dec 15 Posts: 1600 Credit: 78,152,524 RAC: 74,630
On one of my PCs (with 8 GB RAM), I have been crunching 1-core ATLAS tasks lately. In the app_config.xml, I have the following:

<cmdline>--memory_size_mb 3900</cmdline>

assuming that this stands for the maximum RAM needed by / allocated to the processing of a task. However, what I found out is that the task uses almost 7 GB RAM. When looking up a finished task in ATLAS PanDA today, I noticed the following line:

workDirSize=6979584

which obviously stands for the RAM usage of the task. So, how does all this fit together?
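For context, that cmdline normally sits inside an app_version block of app_config.xml. A rough sketch of the whole file follows; the plan class shown is an assumption (the real one can be read from client_state.xml), so treat this as illustration rather than an exact copy of my file:

```xml
<app_config>
  <app_version>
    <app_name>ATLAS</app_name>
    <!-- plan class is an assumption; check client_state.xml for the real one -->
    <plan_class>vbox64_mt_mcore_atlas</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <cmdline>--memory_size_mb 3900</cmdline>
  </app_version>
</app_config>
```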
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0
workDirSize is a contraction of workingDirectorySize. It does not refer to RAM. It refers to the size of the directory (folder) that contains all the files and sub-folders the task generates as it works. |
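To illustrate what that number measures, here is a small sketch that adds up the sizes of all files under a directory, much like a working-directory-size accounting would. The slot path in the comment is just an example, and I don't know for certain which units PanDA reports the value in:

```python
import os

def work_dir_size(path):
    """Total size in bytes of all regular files under `path` (recursive)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Example (path is an assumption; slot number varies per task):
# print(work_dir_size("/var/lib/boinc-client/slots/0"))
```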
Joined: 18 Dec 15 Posts: 1600 Credit: 78,152,524 RAC: 74,630
> workDirSize is a contraction of workingDirectorySize

This was clear to me.

> It does not refer to RAM. It refers to the size of the directory (folder) that contains all the files and sub-folders the task generates as it works.

Still, the size of this WorkingDirectory is obviously the same as the amount of RAM the processing of a task needs (in contrast to what is set in the app_config.xml). So, what I am questioning is what sense it makes to set e.g. 3900 MB RAM for a 1-core task if almost 7 GB are actually taken.
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0
> Still, the size of this WorkingDirectory is obviously the same as the amount of RAM the processing of a task needs

Obviously? How so?
Joined: 18 Dec 15 Posts: 1600 Credit: 78,152,524 RAC: 74,630
> Obviously? How so?

As I explained in my first posting above:

> However, what I found out is that the task uses almost 7 GB RAM.
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0
> However, what I found out is that the task uses almost 7 GB RAM.

How did you find out that the task uses almost 7 GB RAM? I don't know where your confusion on this starts, so I have no idea how to unravel it for you. I only know that if the max RAM used and the max working dir size are the same, then it's only a coincidence.
Joined: 18 Dec 15 Posts: 1600 Credit: 78,152,524 RAC: 74,630
> How did you find out that the task uses almost 7 GB RAM?

This is easy enough: with tools like MemInfo. I looked at what the total RAM usage was BEFORE the task started, and then at what the total RAM usage was later (several times, while the task was running) - so the difference is what the task uses (provided, of course, that no other apps were started meanwhile). After the task finished and uploaded, the value was the same as before the task started. So, in my eyes, it's very clear how much RAM the task was using.

> I only know that if the max RAM used and the max working dir size are the same then it's only a coincidence.

Okay, maybe so. But still the question is: when 3900 MB are set in the app_config.xml, how come the task actually uses almost 7 GB?
Joined: 13 Apr 18 Posts: 443 Credit: 8,438,885 RAC: 0
If I'm not mistaken, MemInfo is a Windows utility. I use Linux only, so I'm not familiar with MemInfo. My hunch is that either MemInfo is mis-reporting, or you are mis-interpreting its output, or there is some other craziness happening.

Take a look at https://lhcathome.cern.ch/lhcathome/result.php?resultid=206322091, a 1-core ATLAS task completed on one of your hosts. It says:

Peak working set size 92.29 MB
Peak swap size 129.78 MB
Peak disk usage 3,555.95 MB

It claims the task used only 92.29 MB RAM, which contradicts:
1) your claim that ATLAS VBox tasks use 7 GB
2) my assertion that they need 3900 MB

Now take a look at https://lhcathome.cern.ch/lhcathome/result.php?resultid=206356752, a 1-core ATLAS native task completed on one of my Linux hosts. It says:

Peak working set size 1,903.52 MB

Which should we believe... 92 MB with the additional overhead of VBox involved, or 1903 MB with no VBox overhead involved? I don't know the answer. Maybe it's all BS. All I know for sure is that in my app_config.xml for ATLAS native I have <cmdline>--memory_size_mb 2100</cmdline> and it works :)
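On Linux, one way to side-step both BOINC's reported numbers and system-wide before/after comparisons is to read a specific process's own peak-memory counter from /proc. A sketch (Linux-only; VmHWM is the kernel's "high water mark" for resident memory, as documented in proc(5)):

```python
import os

def peak_rss_mb(pid):
    """Peak resident set size (VmHWM) of a process, in MB. Linux only."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmHWM:"):
                # Line format: "VmHWM:    123456 kB"
                return int(line.split()[1]) / 1024
    return None

if __name__ == "__main__":
    # Example: peak RSS of this very script's own process
    print(f"{peak_rss_mb(os.getpid()):.1f} MB")
```

Pointing this at the task's actual PID (e.g. the VBox or athena process) would show how much RAM that one process peaked at, independent of anything else running on the host.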
©2023 CERN