Re: brick multiplexing and memory consumption

On Wed, Jun 21, 2017 at 9:53 AM, Raghavendra Talur <rtalur@xxxxxxxxxx> wrote:


On 21-Jun-2017 9:45 AM, "Jeff Darcy" <jeff@xxxxxxxxxx> wrote:



On Tue, Jun 20, 2017, at 03:38 PM, Raghavendra Talur wrote:
Each process takes 795 MB of virtual memory, and resident memory is 10 MB each.

Wow, that's even better than I thought.  I was seeing about a 3x difference per brick (plus the fixed cost of a brick process) during development.  Your numbers suggest more than 10x.  Almost makes it seem worth the effort.  ;)
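
(For anyone reproducing these figures: a minimal sketch, assuming they are the kernel's VmSize/VmRSS counters, i.e. what ps reports as VSZ/RSS. The program below is purely illustrative, not gluster code.)

/* Print virtual (VmSize) and resident (VmRSS) memory for a pid. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64], line[256];
    snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* Both counters are reported by the kernel in kB. */
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}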

:) 


Just to be clear, I am not saying that brick multiplexing isn't working. The aim is to prevent the glusterfsd process from getting OOM-killed: 200 bricks, when multiplexed, consume 20 GB of virtual memory (roughly 100 MB per brick).

Yes, the OOM killer is more dangerous with multiplexing.  It likes to take out the process that is the whole machine's reason for existence, which is pretty darn dumb.  Perhaps we should use oom_adj/OOM_DISABLE to make it a bit less dumb?
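
Something along these lines, as a minimal sketch (assumption: the newer oom_score_adj interface; -1000 is OOM_SCORE_ADJ_MIN, the modern equivalent of the old oom_adj OOM_DISABLE value, and lowering the score requires root or CAP_SYS_RESOURCE):

/* Opt the calling process out of the OOM killer. */
#include <stdio.h>

static int oom_disable_self(void)
{
    FILE *f = fopen("/proc/self/oom_score_adj", "w");
    if (!f)
        return -1;
    /* -1000 == OOM_SCORE_ADJ_MIN: never select this process. */
    int rc = (fprintf(f, "-1000") > 0) ? 0 : -1;
    fclose(f);
    return rc;
}

int main(void)
{
    if (oom_disable_self() != 0)
        perror("oom_score_adj");
    return 0;
}

The brick process could call something like this once at startup.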

This is not so easy in container deployment models. 


If it turns out that the additional ~75 MB of virtual memory per brick attach can't be removed or reduced, then the only solution would be to fix issue 151 [1] by limiting the number of multiplexed bricks.

This is another reason why limiting the number of brick processes is preferable to limiting the number of bricks per process.  When we limit bricks per process and wait until one is "full" before starting another, then that first brick process remains a prime target for the OOM killer.  By "striping" bricks across N processes (where N ~= number of cores), none of them become targets until we're approaching our system-wide brick limit anyway.
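
Roughly like this, as a minimal sketch of the placement policy (the brick/process bookkeeping below is hypothetical, not the actual glusterd code):

/* Round-robin ("stripe") brick assignment across N processes. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long n_procs = sysconf(_SC_NPROCESSORS_ONLN); /* N ~= core count */
    if (n_procs < 1)
        n_procs = 1;

    int n_bricks = 200; /* example count from earlier in the thread */
    for (int brick = 0; brick < n_bricks; brick++) {
        /* Brick i goes to process i mod N, so all N processes grow
         * evenly and no single one fills up (and attracts the OOM
         * killer) before the others. */
        printf("brick %3d -> process %ld\n", brick, brick % n_procs);
    }
    return 0;
}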

+1, I now understand the reasoning behind limiting the number of processes. I was in favor of limiting bricks per process before. 


Makes sense. +1 on this approach from me too. Let's get going with this, IMO.

-Amar
 
Thanks, 
Raghavendra Talur 






--
Amar Tumballi (amarts)
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
