On 20/01/2025 18:11, Rik van Riel wrote:
On Mon, 2025-01-20 at 16:14 +0200, Nadav Amit wrote:
I am not sure we are on the same page. What I suggested is:
1. arch_tlbbatch_flush() would still do tlbsync()
2. No migrate_enable() in arch_tlbbatch_flush()
3. No migrate_disable() in arch_tlbbatch_add_pending()
4. arch_tlbbatch_add_pending() sets cpu_tlbstate.pending_tlb_broadcast
5. switch_mm_irqs_off() checks cpu_tlbstate.pending_tlb_broadcast and,
if it is set, performs a tlbsync and clears it.
How does that synchronize the page freeing (from page
reclaim) with the TLBSYNCs?
What guarantees that the page reclaim path won't free
the pages until after TLBSYNC has completed on the CPUs
that kicked off asynchronous flushes with INVLPGB?
[ you're making me lose my confidence, although I see nothing wrong ]
Freeing the pages must be done after the TLBSYNC. I did not imply it
needs to be changed.
The page freeing (in the reclaim path) only happens after
arch_tlbbatch_flush() has completed. If no migration occurred, nothing
changes: we did not remove any tlbsync, so the ordering is preserved.
If the task did migrate after some INVLPGBs were already issued, then
for correctness we must issue a tlbsync before the task can be
scheduled on another core. That is exactly why the tlbsync in
switch_mm_irqs_off() is needed in that case.