Re: librbd leaks memory on crushmap updates


 



> On 22.06.2022 at 12:52, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> 
> 
>> 
>> I found relatively large allocations in the QEMU smaps and checked their contents. They contained several hundred repetitions of OSD and pool names. We use the default builds on Ubuntu 20.04. Is there a special memory allocator in place that might not clean up properly?
> 
> I think the promise from the OS could be stated as "pages are
> guaranteed to get cleaned before being handed over to the next
> process", but when that actually happens, at free() or at any point
> in between, is probably later than one thinks. (zero-fill-on-demand
> is a thing)
> 
> Some OSes do the page clearing in a lowest-priority process or thread
> that uses otherwise idle CPU time to pre-clean pages, but whether that
> is a win or not seems to depend a lot on whether it destroys your L1
> caches for the running processes and so forth.
> 
> So just as with disks, finding memory pages with junk data still in
> them is not a huge surprise.

Hi Janne,

The areas I see are still dirty and allocated to the QEMU process. I am confused by their size; they are up to 64 MB each.
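
For reference, a rough sketch of how regions like these can be pulled out of /proc/<pid>/smaps, listing mappings with a lot of private-dirty memory (the PID argument and the 1 MB threshold below are only placeholders):

#!/usr/bin/env python3
# Rough sketch: list large private-dirty mappings of a process by parsing
# /proc/<pid>/smaps. The PID argument and the 1 MB threshold are placeholders.
import sys

def dirty_regions(pid, min_dirty_kb=1024):
    regions, current = [], None
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            if "-" in fields[0]:
                # Mapping header: "start-end perms offset dev inode [path]"
                path = fields[5] if len(fields) > 5 else "[anon]"
                current = {"range": fields[0], "path": path, "dirty_kb": 0}
                regions.append(current)
            elif fields[0] == "Private_Dirty:" and current is not None:
                current["dirty_kb"] = int(fields[1])  # value reported in kB
    return [r for r in regions if r["dirty_kb"] >= min_dirty_kb]

if __name__ == "__main__":
    for r in dirty_regions(int(sys.argv[1])):
        print(f"{r['range']}  {r['dirty_kb']:>8} kB dirty  {r['path']}")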

As far as I know there is no special allocator in place, so I wonder why the allocations are so big.
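
On the zero-fill point: anonymous memory handed out by the kernel does read back as zeros, so whatever these regions contain was written from within the process itself. A minimal sketch of that behaviour, assuming Linux and Python's mmap module:

#!/usr/bin/env python3
# Minimal sketch of zero-fill-on-demand: an anonymous private mapping handed
# out by the kernel reads back as zeros, regardless of what the underlying
# physical pages held before.
import mmap

SIZE = 64 * 1024 * 1024        # 64 MB, the same order of magnitude as above

buf = mmap.mmap(-1, SIZE)      # fd=-1 creates an anonymous mapping
assert buf[:4096] == bytes(4096)   # first page is zeroed on first touch
buf.close()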

Peter

> 
> -- 
> May the most significant bit of your life be positive.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



