Re: librbd leaks memory on crushmap updates

> I found relatively large allocations in the qemu smaps and checked the contents. It contained several hundred repetitions of osd and pool names. We use the default builds on Ubuntu 20.04. Is there a special memory allocator in place that might not clean up properly?

I think the promise from the OS could be stated as "pages are
guaranteed to be cleaned before being handed to the next process", but
whether that happens at free() time or at some point afterwards is
probably later than one thinks. (zero-fill-on-demand is a thing)

Some OSes do page clearing in a lowest-priority process or thread that
uses otherwise idle CPU time to pre-clean pages, but whether that is a
win seems to depend a lot on whether it trashes the L1 caches of the
running processes, and so forth.

So just as with disks, finding memory pages with junk data still in
them is not a huge surprise.

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
