On 2023/11/13 16:32, David Hildenbrand wrote:
On 09.11.23 08:09, Kefeng Wang wrote:
On 2023/11/8 21:59, Matthew Wilcox wrote:
On Wed, Nov 08, 2023 at 09:40:09AM +0800, Kefeng Wang wrote:
On 2023/11/7 22:24, Matthew Wilcox wrote:
On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
 struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address)
+			struct vm_area_struct *vma, unsigned long addr)
 {
 	struct folio *folio = page_folio(page);
 	struct anon_vma *anon_vma = folio_anon_vma(folio);
-	struct page *new_page;
+	struct folio *new_folio;

-	if (PageKsm(page)) {
-		if (page_stable_node(page) &&
+	if (folio_test_ksm(folio)) {
+		if (folio_stable_node(folio) &&
 		    !(ksm_run & KSM_RUN_UNMERGE))
 			return page;	/* no need to copy it */
 	} else if (!anon_vma) {
 		return page;		/* no need to copy it */
-	} else if (page->index == linear_page_index(vma, address) &&
+	} else if (page->index == linear_page_index(vma, addr) &&
Hmm. page->index is going away. What should we do here instead?
Do you mean to replace page->index with folio->index, or to remove index
from struct page entirely?
I'm asking you what we should do.
Tail pages already don't have a valid ->index (or ->mapping).
So presumably we can't see a tail page here today. But will we in
future?
I think we could replace page->index with page_to_pgoff(page).
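For context, a rough sketch of what such a helper has to do for a tail
page, which carries no valid ->index of its own; the name and body here
are illustrative, not necessarily the exact in-tree implementation:

	/*
	 * Illustrative only: a tail page has no valid ->index, so derive
	 * the page offset from the head page's index plus the page's
	 * position within the compound page.
	 */
	static inline pgoff_t example_page_pgoff(struct page *page)
	{
		struct page *head = compound_head(page);

		return head->index + (page - head);
	}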
What the second part of that code does is check whether a page might
have been a KSM page before swapout.
Once a KSM page is swapped out, we lose the KSM marker. To recover, we
have to check whether the new page logically "fits" into the VMA.
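If it helps, that is the check the last hunk above touches; from memory
it looks roughly like this in the current tree (the anon_vma->root
comparison sits in the same condition, if I recall correctly):

	/*
	 * After swapout the KSM marker is gone, so fall back to checking
	 * whether the page still logically fits this VMA: same expected
	 * page offset and same anon_vma root.  If so, no copy is needed.
	 */
	} else if (page->index == linear_page_index(vma, addr) &&
			anon_vma->root == vma->anon_vma->root) {
		return page;		/* still no need to copy it */
	}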
Large folios are never KSM folios, and we only swap in small folios (and
in the future, once we do swap in large folios, they could not have been
KSM folios before swapout).
So you could return early from the function if we have a large folio and
base all operations on the (small) folio.
Sure, I will add a folio_test_large() check up front, convert
page->index to folio->index, and adjust the logic if/when KSM and swapin
support large folios (roughly as sketched below), thanks.
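An untested sketch of what I have in mind, on top of the hunk quoted
above:

 	struct folio *folio = page_folio(page);
 	struct anon_vma *anon_vma = folio_anon_vma(folio);
 	struct folio *new_folio;

+	/* Large folios are never KSM folios; nothing to check or copy. */
+	if (folio_test_large(folio))
+		return page;
+
 	if (folio_test_ksm(folio)) {
 ...
-	} else if (page->index == linear_page_index(vma, addr) &&
+	} else if (folio->index == linear_page_index(vma, addr) &&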