On Thu, 29 Sep 2022 19:53:34 +0200
David Hildenbrand <david@xxxxxxxxxx> wrote:

> On 29.09.22 14:05, Claudio Imbrenda wrote:
> > On Thu, 29 Sep 2022 13:12:44 +0200
> > David Hildenbrand <david@xxxxxxxxxx> wrote:
> >
> >> On 29.09.22 12:36, Claudio Imbrenda wrote:
> >>> On Thu, 29 Sep 2022 11:21:44 +0200
> >>> David Hildenbrand <david@xxxxxxxxxx> wrote:
> >>>
> >>>> On 29.09.22 04:52, xu.xin.sc@xxxxxxxxx wrote:
> >>>>> From: xu xin <xu.xin16@xxxxxxxxxx>
> >>>>>
> >>>>> Before enabling use_zero_pages by setting /sys/kernel/mm/ksm/use_zero_pages to 1, pages_sharing of KSM is basically accurate. But after enabling use_zero_pages, all empty pages that are merged with the kernel zero page are not counted in pages_sharing or pages_shared. That is because the rmap_items of these KSM zero pages are not appended to the stable tree of KSM.
> >>>>>
> >>>>> We need to add a count of empty pages to let users know how many empty pages are merged with the kernel zero page(s).
> >>>>>
> >>>>> Please see the subsequent patches for details.
> >>>>
> >>>> Just raising the topic here because it's related to the KSM usage of the shared zero-page:
> >>>>
> >>>> MADV_UNMERGEABLE and other ways to trigger unsharing will *not* unshare the shared zeropage as placed by KSM (which is against the MADV_UNMERGEABLE documentation, at least). It will only unshare actual KSM pages. We might not want to blindly unshare all shared zeropages in applicable VMAs ... using a dedicated shared zero (KSM) page -- instead of the generic zero page -- might be one way to handle this more cleanly.
> >>>
> >>> I don't understand why you need this.
> >>>
> >>> first of all, one zero page would not be enough (depending on the architecture, e.g. on s390x you need many). the whole point of zero page merging is that one zero page is not enough.
> >>
> >> I don't follow. Having multiple ones is a pure optimization on s390x (I recall something about cache coloring), no? So why should we blindly care in the special KSM use case here?
> >
> > because merging pages full of zeroes with only one page will have a negative performance impact on those architectures that need cache colouring (and s390 is not even the only architecture that needs it)
> >
> > the whole point of merging pages full of zeroes with zero pages is to not lose the cache colouring.
> >
> > otherwise you could just let KSM merge all pages full of zeroes with one page (which is what happens without use_zero_pages), and all the numbers are correct.
> >
> > if you are not on s390 or MIPS, you have no use for use_zero_pages
>
> Ah, I see now that use_zero_pages is really only (mostly) s390x specific. I already wondered why on earth we would really need that, thanks for pointing that out.
>
> One question I'd have is: why is the shared zero page treated specially in KSM then *at all*? The cache coloring problem should apply to *each and every* deduplicated page.

true, but unsurprisingly the zero page is the most common one. e.g. if you have a very big and very sparse matrix, you will read lots of consecutive pages of zeroes. there is also a more important issue with VMs, which is actually the reason for the feature (see below). in general it's unlikely that you will read lots of consecutive pages with the exact same non-zero content.

> Why is a page filled with 0xff any different from a page filled with 0x0?
without use_zero_pages, the multiple zero pages in a KVM guest will be merged into one single page in the host, so the guest will lose the benefits of coloured zero pages. unsurprisingly this has a big impact on performance.

> Yes, I read e86c59b1b12d. It doesn't mention any actual performance numbers, nor whether the performance gain only applies to some microbenchmarks nobody cares about.

that feature was implemented because of customer feedback, i.e. some users hit the problem IRL.

> Did you post some benchmark results back then? That would be interesting.

no, and I don't have the numbers at hand right now, but I remember it was a very significant difference in the benchmark.

> I assume that the shared zeropage was simply the low hanging fruit.

of course a very complex system could be implemented to merge pages in different buckets based on the "colour"; the result would probably be that nothing is shared. KSM is a tradeoff between memory consumption and CPU time. fixing zero pages brings speed advantages (on architectures with coloured zero pages) without sacrificing memory savings (on any architecture)

> >>> second, once a page is merged with a zero page, it's not really handled by KSM anymore. if you have a big allocation, of which you only touch a few pages, would the rest be considered "merged"? no, it's just zero pages, right?
> >>
> >> If you haven't touched memory, there is nothing populated -- no shared zeropage.
> >>
> >> We only populate shared zeropages in private anonymous mappings on read access without prior write.
> >
> > that's what I meant. if you read without writing, you get zero pages. you don't consider those to be "shared" from a KSM point of view
> >
> > does it make a difference if some pages that have been written to but now only contain zeroes are discarded and mapped back to the zero pages?
>
> That's a good question. When it comes to unmerging, you might expect that whatever was deduplicated will get duplicated again -- and your memory consumption will adjust accordingly. The stats might give an admin an idea regarding how much memory is actually overcommitted. See below on the important case where we essentially never see the shared zeropage.
>
> The motivation behind these patches would be great to know -- what is the KSM user and what does it want to achieve with these numbers?

anyone who works with large amounts of very sparse data, especially in a VM (as I explained above, with KSM but without use_zero_pages, KVM guests lose the zero page colouring)

> >>> this is the same, except that we take present pages with zeroes in them and we discard them and map them to zero pages. it's kinda like if we had never touched them.
> >>
> >> MADV_UNMERGEABLE
> >>
> >> "Undo the effect of an earlier MADV_MERGEABLE operation on the specified address range; KSM unmerges whatever pages it had merged in the address range specified by addr and length."
> >>
> >> Now please explain to me how not undoing a zeropage merging is correct according to this documentation.
> >
> > because once it's discarded and replaced with a zero page, the page is not handled by KSM anymore.
> >
> > I understand what you mean, that KSM did an action that now cannot be undone, but how would you differentiate between zero pages that were never written to and pages that had been written to and then discarded and mapped back to a zero page because they only contained zeroes?
>
> An application that always properly initializes (writes at least some part once) all its memory will never have the shared zeropage mapped. VM guest memory comes to mind, probably still the most important KSM use case.
>
> There are currently some remaining issues when taking a GUP R/O longterm pin on such a page (e.g., vfio). In contrast to KSM pages, such pins are not reliable for the shared zeropage, but I have fixes for them pending. However, that is rather a corner case (it didn't work at all correctly a while ago) and will be sorted out soon.
>
> So the question is whether MADV_UNMERGEABLE etc. (stats) should be adjusted to document the behavior with use_zero_pages accordingly.

we can count how many times a page full of zeroes was merged with a zero page, but we can't count how many times one of those pages was then unmerged. once it's merged, it becomes a zero page like the others. the documentation can probably be fixed to explain what's going on.
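
For reference, a minimal userspace sketch of the scenario discussed above (not part of the series; the /sys/kernel/mm/ksm/ knobs and the madvise() flags are the existing interfaces, while the helper name and page count are only illustrative). It populates an anonymous private mapping, fills it with zeroes, and registers it with MADV_MERGEABLE. Assuming ksmd is running (run=1) and use_zero_pages is set to 1, these pages should end up mapped to the kernel zero page and therefore never appear in pages_sharing/pages_shared, and MADV_UNMERGEABLE will not map them back to private pages; with use_zero_pages=0 the same pages would show up in pages_sharing, which is the accounting asymmetry the series targets.

/*
 * Illustrative sketch only: zero-filled, previously written anonymous
 * pages handed to KSM. Enabling ksm (run=1) and use_zero_pages needs
 * root; reading the counters below usually does not.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical helper: read one counter from /sys/kernel/mm/ksm/. */
static long read_ksm_counter(const char *name)
{
	char path[128];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const long page_size = sysconf(_SC_PAGESIZE);
	const size_t npages = 256;	/* arbitrary */
	const size_t len = npages * (size_t)page_size;
	unsigned char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Write first, so the pages are really populated (no shared zeropage). */
	memset(buf, 0xff, len);
	/* ... then make them zero-filled, i.e. candidates for use_zero_pages. */
	memset(buf, 0, len);

	/* Register the range with KSM. */
	if (madvise(buf, len, MADV_MERGEABLE)) {
		perror("madvise(MADV_MERGEABLE)");
		return EXIT_FAILURE;
	}

	printf("use_zero_pages: %ld\n", read_ksm_counter("use_zero_pages"));
	printf("pages_sharing:  %ld\n", read_ksm_counter("pages_sharing"));
	puts("let ksmd scan for a while, then re-read pages_sharing:");
	puts("  use_zero_pages=0 -> the zero-filled pages show up there");
	puts("  use_zero_pages=1 -> they are mapped to the kernel zero page");
	puts("                      and stay invisible in the current counters");

	/*
	 * Undoes merging of real KSM pages, but -- as discussed in this
	 * thread -- currently does not replace zeropage mappings placed
	 * by KSM.
	 */
	madvise(buf, len, MADV_UNMERGEABLE);

	munmap(buf, len);
	return EXIT_SUCCESS;
}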