Re: Huge memory usage spike in OSD on hammer/giant

Well, I was going by
http://ceph.com/docs/master/start/hardware-recommendations/ and planning for 2 GB per OSD, so that was a surprise... maybe there should be a warning somewhere?
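For anyone who finds this thread later, the "heap release" workaround discussed below, plus a quick way to check actual OSD memory use, looks roughly like this (a sketch assuming a standard Ceph install with admin keyring access; OSD id 0 is just an example):

```shell
# Show tcmalloc heap statistics for one OSD (includes bytes held
# by tcmalloc but not returned to the kernel)
ceph tell osd.0 heap stats

# Ask all OSDs to return freed tcmalloc pages to the kernel;
# RSS should drop if tcmalloc was holding on to freed memory
ceph tell osd.\* heap release

# On an OSD host: check resident memory (RSS, in KiB) per OSD process
ps -o pid,rss,cmd -C ceph-osd
```

Note this only releases memory tcmalloc has already freed internally; it won't help if the OSD genuinely needs the memory for PGs during recovery.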


On Wed, 9 Sep 2015 12:21:15 +0200, Jan Schermer <jan@xxxxxxxxxxx> wrote:

> The memory gets used for additional PGs on the OSD.
> If you were to "swap" PGs between two OSDs, you'd get memory wasted on both of them, because tcmalloc doesn't release it.*
> It usually stabilizes after a few days even during backfills, so the memory does get reused if needed.
> If for some reason your OSDs get to 8 GB RSS, then I recommend you just get more memory, or try disabling tcmalloc, which can either help or make it even worse :-)
> 
> * E.g. if you do something silly like "ceph osd crush reweight osd.1 10000" you will see the RSS of osd.28 skyrocket. Reweighting it back down will not release the memory until you do "heap release".
> 
> Jan
> 
> 
> > On 09 Sep 2015, at 12:05, Mariusz Gronczewski <mariusz.gronczewski@xxxxxxxxxxxx> wrote:
> > 
> > On Tue, 08 Sep 2015 16:14:15 -0500, Chad William Seys
> > <cwseys@xxxxxxxxxxxxxxxx> wrote:
> > 
> >> Does 'ceph tell osd.* heap release' help with OSD RAM usage?
> >> 
> >> From
> >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003932.html
> >> 
> >> Chad.
> > 
> > It did help just now, but the cluster is in a clean state at the moment. I
> > didn't know about that one, thanks.
> > 
> > The high memory usage stopped once the cluster rebuilt, but I'd planned
> > the cluster for 2 GB per OSD, so I needed to add RAM just to get to the
> > point where Ceph could start rebuilding, as some OSDs ate up to 8 GB during
> > recovery.
> > 
> > -- 
> > Mariusz Gronczewski, Administrator
> > 
> > Efigence S. A.
> > ul. Wołoska 9a, 02-583 Warszawa
> > T: [+48] 22 380 13 13
> > F: [+48] 22 380 13 14
> > E: mariusz.gronczewski@xxxxxxxxxxxx
> > <mailto:mariusz.gronczewski@xxxxxxxxxxxx>
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 



-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczewski@xxxxxxxxxxxx
<mailto:mariusz.gronczewski@xxxxxxxxxxxx>

Attachment: pgp5jBh8abZwG.pgp
Description: OpenPGP digital signature

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
