On 17.11.23 00:47, Barry Song wrote:
On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
On 15.11.23 21:49, Barry Song wrote:
On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
On 14.11.23 02:43, Barry Song wrote:
This patch makes MTE tag saving and restoring support large folios,
so we no longer need to split them into base pages for swap-out
on ARM64 SoCs with MTE.
arch_prepare_to_swap() should take a folio rather than a page as its
parameter because we support THP swap-out as a whole.
Meanwhile, arch_swap_restore() should use a page parameter rather than a
folio, as swap-in always works at the granularity of base pages right
now.
... but then we always have order-0 folios and can pass a folio, or what
am I missing?
Hi David,
you missed the discussion here:
https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@xxxxxxxxxxxxxx/
https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@xxxxxxxxxxxxxx/
Okay, so you want to handle the refault-from-swapcache case where you get a
large folio.
I was misled by your "folio as swap-in always works at the granularity of
base pages right now" comment.
What you actually wanted to say is "While we always swap in small folios, we
might refault large folios from the swapcache, and we only want to restore
the tags for the page of the large folio we are faulting on."
But I do wonder if we can't simply restore the tags for the whole thing at
once and make the interface page-free?
Let me elaborate:
IIRC, if we have a large folio in the swapcache, the swap entries/offset are
contiguous. If you know you are faulting on page[1] of the folio with a
given swap offset, you can calculate the swap offset for page[0] simply by
subtracting the page's index within the folio from the offset.
See page_swap_entry() on how we perform this calculation.
So you can simply pass the large folio and the swap entry corresponding
to the first page of the large folio, and restore all tags at once.
So the interface would be
void arch_prepare_to_swap(struct folio *folio);
void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
I'm sorry if that was also already discussed.
This has been discussed. Steven, Ryan and I all don't think this is a good
option. If we have a large folio with 16 base pages, since do_swap_page()
can only map one base page per page fault, we would end up restoring tags
16 (pages restored per fault) * 16 (number of page faults) = 256 times
for this large folio.
Can't you remember that it's already been restored? That seems like a
reasonable thing to have.
For large folios we have plenty of page flags in tail pages available?
--
Cheers,
David / dhildenb