Message boards : LHCb Application : Influence of RAM allocation for small hosts
Joined: 24 Jul 16 · Posts: 88 · Credit: 239,917 · RAC: 0
I experimented with LHCb WUs using different amounts of RAM allocated to the VM. I looked at the duration of the internal jobs and at the percentage of memory used inside the VM. After some tests on these single-core WUs, here are the results:

- 1250 MB : average job time 3h12 (3h01, 2h55, 3h18, 3h36), memory used inside VM = 70%
- 1500 MB : average job time 3h45 (3h53, 3h50, 3h33, .h..), memory used inside VM = 61%
- 1750 MB : average job time 3h34 (3h03, 3h13, 4h27, .h..), memory used inside VM = 38%
- 2000 MB : average job time 3h06 (3h33, 3h40, 3h15, 1h56), memory used inside VM = 47%
- 2250 MB : average job time 3h23 (4h12, 2h53, 3h30, 2h58), memory used inside VM = 47%

First observation: in all cases the computer stayed responsive. Even with less RAM, the WUs ended correctly. The default requirement of 2048 MB of RAM doesn't seem appropriate.

Second observation: the job duration isn't correlated with the amount of RAM allocated; the influence of the RAM allocation is not very significant. (I have an HDD, not an SSD, so the discrepancy should have been higher: with less RAM there is more swapping, and therefore more disk access.)

Some questions:

- Does the project provide jobs according to the host's resources? (That is, are jobs sorted by difficulty before being sent to a particular host?)
- Is the requirement of 2048 MB still a necessity, looking at these results? (No error occurred during the tests.)
- Was 2048 MB originally a bound coming from the XP operating system?
- What is the optimal target for the needs of the project? (If the default is modified, please tell us the best amount, so we can adjust it with app_config, as in the sketch at the end of this post, provided our host has sufficient RAM, of course.)
- Have statistics been gathered on a larger sample of jobs recently, since LHCb moved to its new "in production" status? (Improvements in handling WUs could have changed the basis of the requirement.)
- Is it possible to add a progress bar inside the VM to show the percentage of the job done? It's rather uncomfortable to follow when you don't crunch 24/7: we have to merge information from the Windows Task Manager and the VM console (ALT+F4) to shut the computer down at the right moment. We have to wait for the message "job finished in slot 1" to be sure the work is really recorded on the server, and with the Task Manager we see the beginning of the upload, which means that after about 18 minutes on average (with my bandwidth) the job will truly be recorded on the server.
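For reference, here is the kind of app_config.xml override I mean. It is only a sketch: the app name "lhcb", the plan class "vbox64", and the vboxwrapper --memory_size_mb option are my assumptions, to be checked against the client logs before relying on them.

```xml
<!-- app_config.xml, placed in the LHC@home project directory.
     Sketch only: the app name, plan class, and the vboxwrapper
     --memory_size_mb option are assumptions to verify locally. -->
<app_config>
    <app_version>
        <app_name>lhcb</app_name>
        <plan_class>vbox64</plan_class>
        <avg_ncpus>1</avg_ncpus>
        <!-- allocate 1250 MB to the VM instead of the 2048 MB default -->
        <cmdline>--memory_size_mb 1250</cmdline>
    </app_version>
</app_config>
```

After saving the file, "Options -> Read config files" in the BOINC Manager should pick it up without restarting the client.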
Joined: 24 Jul 16 · Posts: 88 · Credit: 239,917 · RAC: 0
ERRATUM: for the 1750 MB test, the memory used inside the VM was 57%, not 38%. Sorry for the mistake...
Joined: 20 Jun 14 · Posts: 380 · Credit: 238,712 · RAC: 0
The specification for LHC HEP applications is 2 GB per core; this value is used to build the internal computing infrastructure. The VMs also have 1 GB of swap configured. For the Theory application, others have done similar tests and we arrived at a sensible value for the memory. We have to be careful that the observations hold for all the jobs that LHCb may wish to run. The 2250 MB was originally requested by LHCb.
Yes, as long as the RAM is sufficient, extra RAM will not affect the job execution time.
The internal jobs or the BOINC task?
No, 2 GB per core is what LHC HEP jobs in general require.
LHCb should have the accounting results for the jobs.
Another question for LHCb.
This information is not really designed for the casual cruncher.
Joined: 26 Apr 17 · Posts: 7 · Credit: 22,463 · RAC: 0
"Some questions: Does the project provide jobs according to the host's resources?"

What do you mean exactly? LHCb sends Monte Carlo simulation jobs to BOINC without any special selection with respect to other computing nodes. In some sense we 'sort' the load according to the computational power of the host by tuning the number of events to be generated (without exceeding the 'time slot').

"Is the requirement of 2048 MB still a necessity, looking at these results?"

The amount of required memory may vary from simulation to simulation (depending on the type of events to be generated). The 2 GB requirement (+ swap) should accommodate all kinds of loads.

"Is it possible to add a progress bar inside the VM to show the percentage of the job done?"

Unfortunately, it is not straightforward to implement this in our application. We are looking at it for a future release.
Joined: 24 Jul 16 · Posts: 88 · Credit: 239,917 · RAC: 0
"What do you mean exactly?"

In fact, looking at the results of each test with a different amount of RAM, I didn't notice any change in the behavior of the host (except with 2250 MB of RAM) or in the duration of the internal jobs. There is great variability and/or volatility in the durations, so I wondered why. In my mind, less RAM means more swap, and therefore a longer duration for the job inside the VM. But as Laurence said:

"Yes, as long as the RAM is sufficient, extra RAM will not affect the job execution time."

Now I understand better, even though some swap was used (mostly at the end of the job, just before the upload of the result, so only for a very short time).

"The 2 GB requirement (+ swap) should accommodate all kinds of loads."

Yes, I understand this is a HEP specification, but would you allow volunteers with small hosts (less than 4 GB) to run LHCb jobs with less RAM? (The failure rate might increase a little, but the number of running slots would increase too, and 2 x 1250 = 2500 MB is feasible on a small host.) Or would you advise some of us to use a 2-core setup with 2500 MB in that case (more shared data)? See the sketch after this post. And finally, what is the HEP specification for multi-core work units with LHCb?

Sorry to be so inquisitive... but it's important... to do good science...
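The 2-core setup mentioned above would look something like this; again only a sketch, with the app name "lhcb", the plan class "vbox64_mt_mcore", and the vboxwrapper --nthreads / --memory_size_mb options all being unverified assumptions.

```xml
<!-- Sketch of a 2-core LHCb work unit sharing 2500 MB of RAM.
     App name, plan class, and wrapper options are assumptions. -->
<app_config>
    <app_version>
        <app_name>lhcb</app_name>
        <plan_class>vbox64_mt_mcore</plan_class>
        <avg_ncpus>2</avg_ncpus>
        <!-- two cores in one VM, 2500 MB total instead of 2 x 2048 MB -->
        <cmdline>--nthreads 2 --memory_size_mb 2500</cmdline>
    </app_version>
</app_config>
```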