Miaohe Lin <linmiaohe@xxxxxxxxxx> writes:

> On 2024/3/6 10:52, Huang, Ying wrote:
>> Ryan Roberts <ryan.roberts@xxxxxxx> writes:
>>
>>> There was previously a theoretical window where swapoff() could run and
>>> teardown a swap_info_struct while a call to free_swap_and_cache() was
>>> running in another thread. This could cause, amongst other bad
>>> possibilities, swap_page_trans_huge_swapped() (called by
>>> free_swap_and_cache()) to access the freed memory for swap_map.
>>>
>>> This is a theoretical problem and I haven't been able to provoke it from
>>> a test case. But there has been agreement based on code review that this
>>> is possible (see link below).
>>>
>>> Fix it by using get_swap_device()/put_swap_device(), which will stall
>>> swapoff(). There was an extra check in _swap_info_get() to confirm that
>>> the swap entry was valid. This wasn't present in get_swap_device() so
>>> I've added it. I couldn't find any existing get_swap_device() call sites
>>> where this extra check would cause any false alarms.
>>>
>>> Details of how to provoke one possible issue (thanks to David Hildenbrand
>>> for deriving this):
>>>
>>> --8<-----
>>>
>>> __swap_entry_free() might be the last user and result in
>>> "count == SWAP_HAS_CACHE".
>>>
>>> swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
>>>
>>> So the question is: could someone reclaim the folio and turn
>>> si->inuse_pages==0, before we completed swap_page_trans_huge_swapped()?
>>>
>>> Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are
>>> still referenced by swap entries.
>>>
>>> Process 1 still references subpage 0 via swap entry.
>>> Process 2 still references subpage 1 via swap entry.
>>>
>>> Process 1 quits. Calls free_swap_and_cache().
>>> -> count == SWAP_HAS_CACHE
>>> [then, preempted in the hypervisor etc.]
>>>
>>> Process 2 quits. Calls free_swap_and_cache().
>>> -> count == SWAP_HAS_CACHE
>>>
>>> Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
>>> __try_to_reclaim_swap().
>>>
>>> __try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
>>> put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
>>> swap_entry_free()->swap_range_free()->
>>> ...
>>> WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>>>
>>> What stops swapoff from succeeding after process 2 reclaimed the swap
>>> cache but before process 1 finished its call to
>>> swap_page_trans_huge_swapped()?
>>>
>>> --8<-----
>>
>> I think that this can be simplified. Even for a 4K folio, this could
>> happen.
>>
>> CPU0                                 CPU1
>> ----                                 ----
>>
>> zap_pte_range
>>   free_swap_and_cache
>>     __swap_entry_free
>>     /* swap count becomes 0 */
>>                                      swapoff
>>                                        try_to_unuse
>>                                          filemap_get_folio
>>                                          folio_free_swap
>>                                          /* remove swap cache */
>>                                        /* free si->swap_map[] */
>>
>>     swap_page_trans_huge_swapped <-- access freed si->swap_map !!!
>
> Sorry for jumping into the discussion here. IMHO, free_swap_and_cache() is
> called with the pte lock held. So synchronize_rcu() (called by swapoff())
> will wait for zap_pte_range() to release the pte lock. So this theoretical
> problem can't happen. Or am I missing something?
>
> CPU0                                 CPU1
> ----                                 ----
>
> zap_pte_range
>   pte_offset_map_lock -- spin_lock is held.
>   free_swap_and_cache
>     __swap_entry_free
>     /* swap count becomes 0 */
>                                      swapoff
>                                        try_to_unuse
>                                          filemap_get_folio
>                                          folio_free_swap
>                                          /* remove swap cache */
>                                        percpu_ref_kill(&p->users);
>     swap_page_trans_huge_swapped
>   pte_unmap_unlock -- spin_lock is released.
>                                      synchronize_rcu(); --> Will wait for
>                                                             pte_unmap_unlock
>                                                             to be called?
>                                      /* free si->swap_map[] */

I think that you are right. We are safe if the PTL is held. Thanks a lot
for pointing this out!

--
Best Regards,
Huang, Ying
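A rough sketch of the pattern the patch description above refers to:
free_swap_and_cache() bracketing its use of si->swap_map with
get_swap_device()/put_swap_device() so that swapoff() is stalled for the
duration. This is an illustrative reconstruction based on the mainline
helpers, not a quote of the actual patch; details such as the
__try_to_reclaim_swap() flags are assumptions.

--8<-----

/*
 * Illustrative sketch only, not the actual patch: pin the swap device
 * across the whole of free_swap_and_cache() so swapoff() cannot free
 * si->swap_map underneath us. Helper names follow current mainline;
 * exact details (e.g. the __try_to_reclaim_swap() flags) are assumptions.
 */
int free_swap_and_cache(swp_entry_t entry)
{
        struct swap_info_struct *si;
        unsigned char count;

        if (non_swap_entry(entry))
                return 1;

        /*
         * get_swap_device() re-validates the entry and takes a reference
         * on si->users; it returns NULL if the entry is bad or the device
         * is already being swapped off. Holding the reference keeps
         * swapoff() from freeing si->swap_map until put_swap_device().
         */
        si = get_swap_device(entry);
        if (si) {
                count = __swap_entry_free(si, entry);
                if (count == SWAP_HAS_CACHE &&
                    !swap_page_trans_huge_swapped(si, entry))
                        __try_to_reclaim_swap(si, swp_offset(entry),
                                              TTRS_UNMAPPED | TTRS_FULL);
                /* Only after this may swapoff() tear the device down. */
                put_swap_device(si);
        }
        return si != NULL;
}

--8<-----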