Message boards : LHCb Application : Influence of RAM allocation for small hosts


PHILIPPE

Joined: 24 Jul 16
Posts: 88
Credit: 239,917
RAC: 0
Message 30640 - Posted: 5 Jun 2017, 14:36:34 UTC
Last modified: 5 Jun 2017, 14:37:13 UTC

I tried experimenting with LHCb WUs using different amounts of RAM allocated.
I looked at the duration of the internal jobs and at the percentage of RAM used inside the VM.
After some tests on these single-core WUs, here are the results:

1250 MB:
Average job time = 3h12 (3h01, 2h55, 3h18, 3h36)
Memory used inside VM = 70%
1500 MB:
Average job time = 3h45 (3h53, 3h50, 3h33, .h..)
Memory used inside VM = 61%
1750 MB:
Average job time = 3h34 (3h03, 3h13, 4h27, .h..)
Memory used inside VM = 38%
2000 MB:
Average job time = 3h06 (3h33, 3h40, 3h15, 1h56)
Memory used inside VM = 47%
2250 MB:
Average job time = 3h23 (4h12, 2h53, 3h30, 2h58)
Memory used inside VM = 47%

First discovery:
In all cases, the behavior of the computer was good and responsive.
Even with less RAM, the WUs ended correctly.
The default requirement of 2048 MB of RAM doesn't seem appropriate.

Second discovery:
The job duration isn't correlated with the amount of RAM allocated.
The influence of the allocated RAM is not very significant.
(I have an HDD, not an SSD, so the discrepancy should be even higher: with less RAM, disk accesses are more numerous and consequently there is more swapping.)

Some questions:
Does the project provide jobs according to the host's resources?
(That is to say, is the difficulty level of the jobs sorted before they are sent to a particular host?)

Is the requirement of 2048 MB still a necessity, looking at the results?
(No errors occurred during the tests.)

Was 2048 MB originally a bound coming from the XP operating system?

What is the optimal target for the needs of the project?
(If the default is modified, please let us know the best amount so that we can adjust it with app_config, if our host has sufficient RAM, of course. A sketch of such a file follows below.)
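
For illustration only, a minimal app_config.xml sketch of the kind of override I mean. The app_name and plan_class values here are placeholders (copy the exact strings your BOINC client reports for the LHCb vbox application); --memory_size_mb is the vboxwrapper option that sets the VM RAM size:

<app_config>
  <app_version>
    <!-- placeholders: use the exact app_name and plan_class from client_state.xml -->
    <app_name>LHCb</app_name>
    <plan_class>vbox64</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <!-- ask vboxwrapper to create the VM with 1250 MB instead of the default 2048 MB -->
    <cmdline>--memory_size_mb 1250</cmdline>
  </app_version>
</app_config>

The file goes in the LHC@home project directory and is picked up after "Read config files" in the BOINC manager, or after a client restart.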

Have statistics been gathered on a larger sample of jobs recently, since the LHCb project's new "in production" status?
(Improvements in handling WUs could have changed the basis of the memory requirement.)

Is it possible to add a progress bar inside the VM to see the percentage of the job done?
It's rather uncomfortable to follow when you don't crunch 24/7: we need to merge information from the Windows task manager and the console (ALT+F4) to shut down the computer at the right moment.
We have to wait for the message "job finished in slot 1" to be sure the work is really recorded on the server.
And with the Windows task manager, we can see the beginning of the upload, which tells us that in about 18 minutes on average (with my bandwidth) the job will truly be recorded on the server.
PHILIPPE

Joined: 24 Jul 16
Posts: 88
Credit: 239,917
RAC: 0
Message 30644 - Posted: 5 Jun 2017, 19:32:47 UTC - in response to Message 30640.  

ERRATUM:
1750 MB:
Average job time = 3h34 (3h03, 3h13, 4h27, .h..)
Memory used inside VM = 57%, not 38%

Sorry for the mistake...
Profile Laurence
Project administrator
Project developer

Joined: 20 Jun 14
Posts: 373
Credit: 238,712
RAC: 0
Message 30646 - Posted: 6 Jun 2017, 8:51:17 UTC - in response to Message 30640.  
Last modified: 6 Jun 2017, 8:51:50 UTC


First discovery:
In all cases, the behavior of the computer was good and responsive.
Even with less RAM, the WUs ended correctly.
The default requirement of 2048 MB of RAM doesn't seem appropriate.


The specification for LHC HEP applications is 2 GB per core. This value is used to build the internal computing infrastructure. The VMs also have 1 GB of swap configured. For the Theory application, others have done similar tests and we arrived at a sensible value for the memory. We have to be careful that the observations hold for all the jobs that LHCb may wish to run. The 2250 MB was originally requested by LHCb.



Second discovery:
The job duration isn't correlated with the amount of RAM allocated.
The influence of the allocated RAM is not very significant.
(I have an HDD, not an SSD, so the discrepancy should be even higher: with less RAM, disk accesses are more numerous and consequently there is more swapping.)

Yes, as long as the RAM is sufficient, extra RAM will not affect the job execution time.


Some questions:
Does the project provide jobs according to the host's resources?


The internal jobs or the BOINC task?


Is the requirement of 2048 MB still a necessity, looking at the results?


We will have to check with LHCb.

Was 2048 MB originally a bound coming from the XP operating system?

No, 2 GB per core is what LHC HEP jobs in general require.


What is the optimal target for the needs of the project?
(If the default is modified, please let us know the best amount so that we can adjust it with app_config, if our host has sufficient RAM, of course.)

2 GB per core is what is needed, but if we can reduce this it would be good.

Have statistics been gathered on a larger sample of jobs recently, since the LHCb project's new "in production" status?
(Improvements in handling WUs could have changed the basis of the memory requirement.)

LHCb should have the accounting results for the jobs.


Is it possible to add a progress bar inside the VM to see the percentage of the job done?

Another question for LHCb.


It's rather uncomfortable to follow when you don't crunch 24/7: we need to merge information from the Windows task manager and the console (ALT+F4) to shut down the computer at the right moment.

This information is not really designed for the casual cruncher.
Luca Tomassetti

Joined: 26 Apr 17
Posts: 7
Credit: 22,463
RAC: 0
Message 30662 - Posted: 6 Jun 2017, 13:51:05 UTC - in response to Message 30640.  

Some questions:
Does the project provide jobs according to the host's resources?
(That is to say, is the difficulty level of the jobs sorted before they are sent to a particular host?)


What do you mean exactly?
LHCb is sending Monte Carlo simulation jobs to BOINC without special selection with respect to other computing nodes. In some sense we 'sort' the load according to the computational power of the host by tuning the number of events to be generated (without exceeding the 'time slot').

Is the requirement of 2048 MB still a necessity, looking at the results?
(No errors occurred during the tests.)

Was 2048 MB originally a bound coming from the XP operating system?

What is the optimal target for the needs of the project?
(If the default is modified, please let us know the best amount so that we can adjust it with app_config, if our host has sufficient RAM, of course.)


The amount of required memory may vary from simulation to simulation (depending on the type of events to be generated). The 2 GB requirement (+ swap) should accommodate all kinds of loads.


Is it possible to add a progress bar inside the VM to see the percentage of the job done?


Unfortunately, it is not straightforward to implement this in our application. We are looking at it for a future release.
PHILIPPE

Joined: 24 Jul 16
Posts: 88
Credit: 239,917
RAC: 0
Message 30676 - Posted: 6 Jun 2017, 20:33:38 UTC - in response to Message 30662.  


Some questions:
Does the project provide jobs according to the host's resources?



The internal jobs or the BOINC task?

What do you mean exactly?
LHCb is sending Monte Carlo simulation jobs to BOINC without special selection with respect to other computing nodes. In some sense we 'sort' the load according to the computational power of the host by tuning the number of events to be generated (without exceeding the 'time slot').

In fact, looking at the results for each test with a different amount of RAM, I didn't notice any change in the behavior of the host (except with 2250 MB of RAM) or in the duration of the internal jobs. There is great variability and/or volatility in the durations.
So I wondered why.
In my mind, less RAM means more swapping, and therefore a longer duration for the job inside the VM.

But as Laurence said:
Yes, as long as the RAM is sufficient, extra RAM will not affect the job execution time.

I understand better now, even if some swap was used (above all at the end of the job, just before the upload of the result, so for a very short time).

The 2 GB requirement (+ swap) should accommodate all kinds of loads.

Yes, I understand this is a HEP specification,
but do you allow volunteers with small hosts (less than 4 GB) to use less RAM to execute LHCb jobs?

(Maybe the job failure rate would increase a little, but the number of running slots would also increase.) (2 * 1250 = 2500 MB is possible for a small host.)

Or do you advise some of us to use a 2-core setup with 2500 MB in this case (more shared data)? (A sketch of such a setup follows below.)
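
As an illustration only, a hedged app_config.xml sketch of such a 2-core setup, assuming a multi-core LHCb vbox application is available on the host; app_name and plan_class are again placeholders, and --nthreads is the vboxwrapper option that sets the number of CPUs given to the VM:

<app_config>
  <app_version>
    <!-- placeholders: use the exact app_name and plan_class reported by the BOINC client -->
    <app_name>LHCb</app_name>
    <plan_class>vbox64_mt_mcore_lhcb</plan_class>
    <avg_ncpus>2</avg_ncpus>
    <!-- two cores sharing a single 2500 MB VM -->
    <cmdline>--nthreads 2 --memory_size_mb 2500</cmdline>
  </app_version>
</app_config>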

And finally, what is the HEP specification for multi-core work units with LHCb?

Sorry to be so inquisitive... But it's important... to do good science...
