Message boards : ATLAS application : Disk usage limit exceeded
[AF>Le_Pommier] Jerome_C2005

Joined: 12 Jul 11
Posts: 93
Credit: 1,129,876
RAC: 7
Message 37939 - Posted: 5 Feb 2019, 21:46:25 UTC
Last modified: 5 Feb 2019, 21:48:26 UTC

Hi

I have this task that had been running for almost 3 days, and then it crashed with "Disk usage limit exceeded"? What disk usage? Inside the VM???

Not to mention it was using 8 GB of memory; with everything else running, my 16 GB iMac was dying... but I hung on and...

And of course 0 credit.

This is crap.
ID: 37939
[AF>Le_Pommier] Jerome_C2005

Joined: 12 Jul 11
Posts: 93
Credit: 1,129,876
RAC: 7
Message 37968 - Posted: 10 Feb 2019, 11:03:55 UTC
Last modified: 10 Feb 2019, 11:04:25 UTC

The last sentence meant "intense cruncher frustration at seeing the task go to the bin".

But you cannot edit your own post on this forum...


Edit: I can edit that one, but not the first one...?
ID: 37968
Jonathan

Joined: 25 Sep 17
Posts: 93
Credit: 3,078,808
RAC: 2,719
Message 37969 - Posted: 10 Feb 2019, 15:54:06 UTC - in response to Message 37968.  

Have you had any VirtualBox-related tasks complete? I think you may have had too many cores assigned to that task. Your processor has 4 cores, 8 with hyperthreading? Try setting the ATLAS task to use 4 cores or fewer for a single work unit. See the forum post "Checklist Version 3 for Atlas@Home (and other VM-based Projects) on your PC" in the Number Crunching section.
ID: 37969
Erich56

Joined: 18 Dec 15
Posts: 1686
Credit: 100,452,151
RAC: 103,167
Message 37970 - Posted: 10 Feb 2019, 16:54:04 UTC - in response to Message 37968.  

But you cannot edit your own post on this forum...

Edit: I can edit that one, but not the first one...?
You can edit your own posting only within a limited time. I don't know exactly how long that is, but it's rather short (only a few hours, if I'm not wrong).
ID: 37970
[AF>Le_Pommier] Jerome_C2005

Joined: 12 Jul 11
Posts: 93
Credit: 1,129,876
RAC: 7
Message 38004 - Posted: 13 Feb 2019, 18:01:43 UTC - in response to Message 37969.  
Last modified: 13 Feb 2019, 18:03:54 UTC

Have you had any VirtualBox-related tasks complete? I think you may have had too many cores assigned to that task. Your processor has 4 cores, 8 with hyperthreading? Try setting the ATLAS task to use 4 cores or fewer for a single work unit. See the forum post "Checklist Version 3 for Atlas@Home (and other VM-based Projects) on your PC" in the Number Crunching section.

Yes sir, for many years! I was among the early testers of Test4Theory@Home, the first VM project ever, on that same good old iMac (late 2009 model). I even doubled my RAM (8 → 16 GB) back then to run those big babies more comfortably :) (well, not only for that, but it was one of the reasons!)

And I have already had other ATLAS tasks finish quietly.

Still, I limit myself to 1 WU at a time and 6 cores per task; this setup is fine with the other LHC (or LHCdev) applications.

(I also participate in other VM-based BOINC projects, but none is as demanding as LHC!)

But AFAIK this was the first task to crash with that specific error, which is especially frustrating considering how long it had already been calculating...


But you cannot edit your own post on this forum...

Edit: I can edit that one, but not the first one...?
You can edit your own posting only within a limited time. I don't know exactly how long that is, but it's rather short (only a few hours, if I'm not wrong).

Good to know! I just did that to add this part of the answer, thanks.
ID: 38004
[AF>Le_Pommier] Jerome_C2005

Joined: 12 Jul 11
Posts: 93
Credit: 1,129,876
RAC: 7
Message 38242 - Posted: 13 Mar 2019, 13:10:05 UTC

Again the same error...

https://lhcathome.cern.ch/lhcathome/result.php?resultid=218809592
ID: 38242
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 1268
Credit: 8,421,637
RAC: 1,939
Message 39655 - Posted: 19 Aug 2019, 17:05:30 UTC
Last modified: 19 Aug 2019, 20:57:08 UTC

LHC@home 19 Aug 17:43:45 CEST Aborting task GsBMDmx7vJvnsSi4apGgGQJmABFKDmABFKDmkqqXDmABFKDmgtZzhm_0: exceeded disk limit: 9016.27MB > 7629.39MB

https://lhcathome.cern.ch/lhcathome/result.php?resultid=243052184

Edit: A VBox ATLAS job uses > 5.6 GB of disk space in one slot when a job is suspended and saved.
But when the snapshot mechanism is used, there is a short period during which the newest snapshot has been created and the older one has not yet been deleted, which exceeds the above-mentioned 7.5 GB disk limit (disk_bound = 8,000,000,000 bytes).
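The arithmetic behind those numbers can be sketched as follows; the disk_bound value and the 5.6 GB snapshot size are taken from the post above, and the transient doubling during a snapshot is the scenario described there:

```python
# Sketch: why a snapshot can trip the BOINC disk limit for these tasks.
DISK_BOUND_BYTES = 8_000_000_000  # disk_bound quoted in the post above

limit_mb = DISK_BOUND_BYTES / 1024**2  # MB, as reported in the client log
limit_gb = DISK_BOUND_BYTES / 1024**3

print(f"limit: {limit_mb:.2f} MB = {limit_gb:.2f} GB")  # 7629.39 MB = 7.45 GB

# While the new snapshot is being written and the old one is not yet
# deleted, both copies sit in the slot directory at the same time:
snapshot_gb = 5.6
transient_peak_gb = 2 * snapshot_gb  # roughly 11.2 GB on disk

print(transient_peak_gb > limit_gb)  # True -> "Disk usage limit exceeded"
```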
ID: 39655
Magic Quantum Mechanic

Joined: 24 Oct 04
Posts: 1114
Credit: 49,503,363
RAC: 3,820
Message 49283 - Posted: 27 Jan 2024, 23:56:20 UTC

I just had one do this, and just for the heck of it I googled and found this thread. Mine was a Theory task, so it couldn't be a core-count issue, or actual disk size, or RAM on my machine; looking here, I see this one needs to be tossed into the bin.

https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=218815358
Run time 2 days 10 hours 30 min 36 sec
ID: 49283
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 1268
Credit: 8,421,637
RAC: 1,939
Message 49289 - Posted: 28 Jan 2024, 13:42:35 UTC - in response to Message 49283.  

I just had one do this, and just for the heck of it I googled and found this thread. Mine was a Theory task, so it couldn't be a core-count issue, or actual disk size, or RAM on my machine; looking here, I see this one needs to be tossed into the bin.

https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=218815358
Run time 2 days 10 hours 30 min 36 sec
Peak disk usage was 7.48 GB, while only 7.450580596923828 GB (8,000,000,000 bytes) is allowed for all files of one Theory task in one slot directory.

Job info: [boinc pp z1j 8000 - - sherpa 2.2.9 default 100000 36] will not run successfully on BOINC.
I also have a long-running sherpa (currently 28 hours, 44% through the events). Slot size at the moment: 1,364,082,688 bytes.
ID: 49289
Magic Quantum Mechanic

Joined: 24 Oct 04
Posts: 1114
Credit: 49,503,363
RAC: 3,820
Message 49297 - Posted: 29 Jan 2024, 10:02:25 UTC - in response to Message 49289.  
Last modified: 29 Jan 2024, 10:04:43 UTC

Yes, sherpa can be that way for sure, but I do get Valid ones with them once in a while. Most of the nice long Theory tasks tend to be pythia 6 or 8, and many of those come back Valid after 7 days; I tend to save some of those when I catch them.
I watch and check my VirtualBox tasks 99.8% of the time. (I should find out which host I ran the ATLAS tasks on, since they aren't saved in our account stats here like at -dev; I hope I can look in the BOINC files, since Win 11 can be a pain in the ...) I just started a new batch and one is a sherpa, so if anything happens I will switch over to the Theory page.
ID: 49297



©2024 CERN