Re: Memory leak in Ceph OSD?

I retract my previous statement(s).

My current suspicion is that this isn't a leak so much as load-driven growth; after enough waiting, it generally seems to settle around some equilibrium. We do seem to sit at roughly mempools x 2.4 ~ ceph-osd RSS, which is on the high side (the documentation alludes to expecting ~1.5x).
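For anyone wanting to reproduce the comparison above: the mempool total comes from `ceph daemon osd.<id> dump_mempools` and the RSS from the ceph-osd process (e.g. VmRSS in /proc/<pid>/status). A minimal sketch of the arithmetic, with made-up placeholder numbers rather than measurements from this cluster:

```python
# Sketch: compare ceph-osd RSS against its mempool total.
# In practice, read mempool_bytes from
#   `ceph daemon osd.<id> dump_mempools`
# and rss_bytes from VmRSS in /proc/<pid>/status.
# The values below are illustrative placeholders only.

def rss_to_mempool_ratio(rss_bytes: int, mempool_bytes: int) -> float:
    """Ratio of resident set size to the sum of all mempool bytes."""
    return rss_bytes / mempool_bytes

# e.g. 4.8 GB RSS against 2.0 GB of mempools -> ratio 2.4,
# versus the ~1.5x the documentation alludes to.
ratio = rss_to_mempool_ratio(4_800_000_000, 2_000_000_000)
print(round(ratio, 1))  # 2.4
```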

-KJ 

On Mon, Mar 19, 2018 at 3:05 AM, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

We don't run compression as far as I know, so that wouldn't be it. We do
actually run a mix of bluestore & filestore - due to the rest of the
cluster predating a stable bluestore by some amount.


I upgraded 12.2.2 -> 12.2.4 on 2018/03/10 and don't see any increase in memory usage. No compression, of course.


http://storage6.static.itmages.com/i/18/0319/h_1521453809_9131482_859b1fb0a5.png




k



--
Kjetil Joergensen <kjetil@xxxxxxxxxxxx>
SRE, Medallia Inc
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
