emperor -> firefly: Significant increase in RAM usage

We don't explicitly test for this, but I'm surprised to hear about a
jump of that magnitude. Do you have any more detailed profiling? Can
you generate some (e.g. with the tcmalloc heap dumps)?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
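
(For reference, a minimal sketch of driving the OSD heap profiler from a
script, assuming the ceph CLI and an admin keyring are available on the
host; osd.0 and the sleep duration are placeholders, not values from this
thread:)

    # Minimal sketch: drive the tcmalloc heap profiler on one OSD via
    # "ceph tell". Assumes the ceph CLI and an admin keyring are present
    # on this host; osd.0 is a placeholder id.
    import subprocess
    import time

    def heap_cmd(osd_id, action):
        """Run "ceph tell osd.<id> heap <action>" and return its output."""
        result = subprocess.run(
            ["ceph", "tell", "osd.%d" % osd_id, "heap", action],
            check=True, capture_output=True, text=True,
        )
        return result.stdout + result.stderr

    osd = 0
    heap_cmd(osd, "start_profiler")   # start tcmalloc heap profiling
    time.sleep(600)                   # let the OSD run under load for a while
    print(heap_cmd(osd, "stats"))     # current tcmalloc heap statistics
    heap_cmd(osd, "dump")             # write a *.heap dump file
    heap_cmd(osd, "stop_profiler")    # stop profiling
    # The *.heap files (written under the OSD's log directory by default)
    # can then be inspected with google-pprof.

(The same heap subcommands should also work through the OSD's admin
socket via "ceph daemon osd.<id> heap ...".)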

On Mon, Jul 7, 2014 at 3:03 AM, Sylvain Munaut
<s.munaut at whatever-company.com> wrote:
> Hi,
>
>
>> We actually saw a decrease in memory usage after upgrading to Firefly,
>> though we did reboot the nodes after the upgrade while we had the
>> maintenance window. This is with 216 OSDs total (32-40 per node):
>> http://i.imgur.com/BC7RuXJ.png
>
>
> Interesting. Is that cluster for RBD or RGW? My RBD OSDs are a bit
> better behaved, but they still had this 25% bump in memory usage ...
>
>
>
> Here the memory pretty much just grows continually.
>
> This is the log over the last year.
>
> http://i.imgur.com/0NUFjpz.png
>
> At the very beginning (~250 MB per process) those OSDs were empty,
> having just been added. Then we changed the crushmap to map all of our
> RGW pools to them, and since then memory has grown slowly, with a bump
> at pretty much each update.
>
> And this is a pretty small set of OSDs: there are only 8 OSD
> processes across 4 nodes, storing barely 1 TB in 2.5 million objects,
> split into 7 pools and 5376 PGs (some pools have size=3, others
> size=2).
>
> 1.5 GB per OSD process seems a bit big to me.
>
>
> Cheers,
>
>    Sylvain
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
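
(As a rough sanity check on the numbers quoted above; the 2.5x figure
below is an assumed average of the size=2 and size=3 pools, not
something stated in the thread:)

    # Back-of-the-envelope PG load per OSD for the cluster described above.
    total_pgs = 5376        # from the thread
    osds = 8                # from the thread
    avg_size = 2.5          # assumed average of the size=2 and size=3 pools

    pg_copies_per_osd = total_pgs * avg_size / osds
    print("~%d PG copies per OSD" % pg_copies_per_osd)   # ~1680

That is well above the ~100 PG copies per OSD that the usual sizing
guidance targets, which on its own tends to push up per-OSD memory
regardless of how little data is stored.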

