On Sun, May 21, 2023 at 09:52:31PM -0700, Hugh Dickins wrote:
> pte_offset_map() was still using kmap_atomic(): update it to the
> preferred kmap_local_page() before making further changes there, in case
> we need this as a bisection point; but I doubt it can cause any trouble.
>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> ---
>  include/linux/pgtable.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 8ec27fe69dc8..94235ff2706e 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -96,9 +96,9 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
>
>  #if defined(CONFIG_HIGHPTE)
>  #define pte_offset_map(dir, address)				\
> -	((pte_t *)kmap_atomic(pmd_page(*(dir))) +		\
> +	((pte_t *)kmap_local_page(pmd_page(*(dir))) +		\
>  	 pte_index((address)))
> -#define pte_unmap(pte) kunmap_atomic((pte))
> +#define pte_unmap(pte) kunmap_local((pte))
>  #else
>  #define pte_offset_map(dir, address)	pte_offset_kernel((dir), (address))
>  #define pte_unmap(pte)	((void)(pte))	/* NOP */

(I think this could be a dumb question if this patch has been running
downstream for years, but still..)

I assume one major difference with kmap_local() is that page faults are
not disabled, while kmap_atomic() does disable them.

Meanwhile, pte_offset_map() is also used by pte_offset_map_lock(), which
means that before this patch CONFIG_HIGHPTE systems disable page faults
before taking the pgtable lock for it, while they will stop doing so
after this change.

Then what happens if a page fault occurs on the same pgtable lock range
that is already taken by the thread context?  What stops the deadlock
from happening?

Thanks,

-- 
Peter Xu
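
For reference, the ordering Peter is describing comes from the generic
pte_offset_map_lock(): the pte page is mapped first, and only then is the
page table spinlock taken.  What follows is a rough sketch, from memory, of
the generic definition in include/linux/mm.h around this point in time; the
exact form in the tree may differ in detail:

	/*
	 * Rough sketch (not copied verbatim) of the generic
	 * pte_offset_map_lock() in include/linux/mm.h at this time:
	 * pte_offset_map() runs first -- kmap_atomic() before this patch,
	 * kmap_local_page() after it, under CONFIG_HIGHPTE -- and the
	 * pgtable spinlock is only taken afterwards.
	 */
	#define pte_offset_map_lock(mm, pmd, address, ptlp)	\
	({							\
		spinlock_t *__ptl = pte_lockptr(mm, pmd);	\
		pte_t *__pte = pte_offset_map(pmd, address);	\
		*(ptlp) = __ptl;				\
		spin_lock(__ptl);				\
		__pte;						\
	})

So after the patch, both the kmap_local mapping and the subsequent
spin_lock() happen with page faults still enabled, which is the window the
question above is about.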