librbd leaks memory on crushmap updates

Hi,


we noticed that some of our long-running VMs (one year without migration) seem to have a very slow memory leak. A dump of the leaked memory appeared to contain OSD and pool information, so we concluded that it must be related to crush map updates. We then wrote a test script in our dev environment that constantly takes OSDs out and kicks them back in as soon as all remappings are done.
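
For reference, a minimal sketch of the kind of churn script described above (this is an illustration, not the author's actual script; the polling interval and the PG-state grep are assumptions):

```shell
#!/bin/sh
# Hypothetical OSD churn loop: take each OSD out, wait for the cluster
# to settle, put it back in, and repeat. Forces continuous osdmap churn.
set -eu

wait_until_clean() {
    # Poll `ceph pg stat` until no PGs report remapped/backfill/recovery.
    while ceph pg stat | grep -Eq 'remapped|backfill|recovering'; do
        sleep 10
    done
}

churn_osd() {
    # Take one OSD out, wait for remapping to finish, then bring it back.
    ceph osd out "$1"
    wait_until_clean
    ceph osd in "$1"
    wait_until_clean
}

# Only start the endless loop when invoked with "run", so the functions
# above can be sourced or inspected without touching a live cluster.
if [ "${1:-}" = "run" ]; then
    while true; do
        for id in $(ceph osd ls); do
            churn_osd "$id"
        done
    done
fi
```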

With that script running, the PSS usage of the QEMU process increases steadily (the VM's main memory is in hugetlbfs), on the order of 5 MB/day for a very small dev cluster with approx. 40 OSDs and 5 pools.
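
To track the leak over time, the QEMU process's PSS can be sampled from /proc; a minimal sketch (smaps_rollup requires kernel >= 4.14, and looking up the QEMU pid is left to the reader):

```shell
#!/bin/sh
# Print the proportional set size (PSS, in kB) of a process by summing
# the Pss line from /proc/<pid>/smaps_rollup -- the same PSS figure
# that tools like smem report.
set -eu

pss_kb() {
    awk '/^Pss:/ {print $2}' "/proc/$1/smaps_rollup"
}

# Example: sample our own shell (substitute the QEMU pid in practice).
pss_kb "$$"
```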

We first observed this issue with Nautilus 14.2.22 and then also tried Octopus 15.2.16, where issue #38403 should have been fixed.


Any ideas, other than migrating VMs when their PSS usage gets too high?


Thanks

Peter



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


