Re: librbd leaks memory on crushmap updates

On 22.06.22 at 15:46, Josh Baergen wrote:
> Hey Peter,
>
>> I found relatively large allocations in the qemu smaps and checked the contents. It contained several hundred repetitions of osd and pool names. We use the default builds on Ubuntu 20.04. Is there a special memory allocator in place that might not clean up properly?
>
> I'm sure you would have noticed this and mentioned it if it was so -
> any chance the contents of these regions look like log messages of
> some kind? I recently tracked down a high client memory usage that
> looked like a leak that turned out to be a broken config option
> resulting in higher in-memory log retention:
> https://tracker.ceph.com/issues/56093. AFAICT it affects Nautilus+.


Hi Josh, hi Ilya,


It seems we were in fact facing two leaks with 14.x. Our long-running VMs with librbd 14.x have several million items in the osdmap mempool.
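For reference, a minimal sketch of how those counters can be read from a running client, assuming the librbd client has an admin socket configured (the socket path is only an example, and the JSON layout of dump_mempools is navigated defensively since it may vary between releases):

#!/usr/bin/env python3
# Minimal sketch: read the osdmap mempool counters from a running librbd
# client via its admin socket. Socket path is an example; adjust to your setup.
import json
import subprocess

ASOK = "/var/run/ceph/ceph-client.libvirt.asok"  # example path, adjust

raw = subprocess.check_output(["ceph", "--admin-daemon", ASOK, "dump_mempools"])
by_pool = json.loads(raw).get("mempool", {}).get("by_pool", {})
osdmap = by_pool.get("osdmap", {})
print("osdmap mempool: {} items, {} bytes".format(
    osdmap.get("items", "?"), osdmap.get("bytes", "?")))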

In our testing environment with 15.x I see no unbounded growth of the osdmap mempool (compared to a second dev host with a 14.x client, where I do see the increase with my tests), but I still see memory leaking when I generate a lot of osdmap changes - and that does in fact seem to be log messages. Thanks, Josh.
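To double-check the log retention angle on a running client, a minimal sketch along the same lines (again assuming an admin socket is configured; log_max_recent and log_max_new are the stock log subsystem options, which may or may not be the exact knobs behind #56093 - see the tracker for the details):

#!/usr/bin/env python3
# Minimal sketch: print the in-memory log retention options of a running
# librbd client via its admin socket. Socket path is an example; the option
# names are the generic log subsystem knobs, not necessarily those from #56093.
import json
import subprocess

ASOK = "/var/run/ceph/ceph-client.libvirt.asok"  # example path, adjust

cfg = json.loads(subprocess.check_output(
    ["ceph", "--admin-daemon", ASOK, "config", "show"]))
for opt in ("log_max_recent", "log_max_new"):
    print(opt, "=", cfg.get(opt, "<not reported>"))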


So I would appreciate it if #56093 could be backported to Octopus before its final release.


Thanks

Peter



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


