On 18.07.21 06:30, Qi Zheng wrote:
> Hi,
>
> This patch series aims to free user PTE page table pages when all PTE
> entries are empty.
> The beginning of this story is that some malloc libraries (e.g. jemalloc
> or tcmalloc) usually allocate a large amount of virtual address space with
> mmap() and do not unmap it. They use madvise(MADV_DONTNEED) when they want
> to free the physical memory. But the page tables are not freed by
> madvise(), so a process that touches an enormous virtual address space can
> end up with a large number of page tables.
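(Illustrative only, not part of the series: a minimal userspace sketch of
the pattern described above. The 1 GiB size and the memset() are made up;
the point is that MADV_DONTNEED drops the physical pages but leaves the
now-empty PTE tables in place until munmap() or exit.)

#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;	/* 1 GiB of virtual address space */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return EXIT_FAILURE;

	memset(p, 0, len);              /* populates pages and PTE tables */
	madvise(p, len, MADV_DONTNEED); /* frees the pages, not the tables */

	/* The range stays mapped; its page tables are only freed at unmap. */
	return EXIT_SUCCESS;
}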
... did you see that I am actually looking into this?
https://lkml.kernel.org/r/bae8b967-c206-819d-774c-f57b94c4b362@xxxxxxxxxx
and have already spent a significant amount of time on it as part of my
research, which is *really* unfortunate and makes me quite frustrated at
the beginning of the week already ...
Ripping out page tables is quite difficult, as we have to stop all page
table walkers from touching them, including fast_gup, rmap and page
faults. This usually involves taking the mmap lock in write mode. My
approach does page table reclaim asynchronously from another thread and
does not rely on reference counts.
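(A hypothetical sketch, for illustration only; neither series necessarily
does it this way. It shows the kind of synchronization point meant above:
the mmap lock taken in write mode keeps page faults and other
mmap_lock-based walkers away while one empty PTE table is detached. rmap
and fast_gup need additional care, e.g. the page table lock and a TLB
flush/IPI before the table page is reused, which is only hinted at here.
reclaim_empty_pte_table() is a made-up helper name.)

#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

/* Made-up helper: detach and free one known-empty PTE page table. */
static void reclaim_empty_pte_table(struct mm_struct *mm, pmd_t *pmd)
{
	pgtable_t token;

	mmap_write_lock(mm);	/* exclude faults and other mmap_lock users */
	if (!pmd_none(*pmd) && !pmd_trans_huge(*pmd)) {
		/* Assumes all PTRS_PER_PTE entries were found pte_none(). */
		token = pmd_pgtable(*pmd);
		pmd_clear(pmd);
		flush_tlb_mm(mm);	/* walkers may still cache the entry */
		pte_free(mm, token);
		mm_dec_nr_ptes(mm);
	}
	mmap_write_unlock(mm);
}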
--
Thanks,
David / dhildenb