On 07.12.22 21:30, Peter Xu wrote:
> Since walk_hugetlb_range() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
>  arch/s390/mm/gmap.c      |  2 ++
>  fs/proc/task_mmu.c       |  2 ++
>  include/linux/pagewalk.h | 11 ++++++++++-
>  mm/hmm.c                 | 15 ++++++++++++++-
>  mm/pagewalk.c            |  2 ++
>  5 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 8947451ae021..292a54c490d4 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2643,7 +2643,9 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
>  	end = start + HPAGE_SIZE - 1;
>  	__storage_key_init_range(start, end);
>  	set_bit(PG_arch_1, &page->flags);
> +	hugetlb_vma_unlock_read(walk->vma);
>  	cond_resched();
> +	hugetlb_vma_lock_read(walk->vma);
>  	return 0;
>  }
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index e35a0398db63..cf3887fb2905 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1613,7 +1613,9 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
>  		frame++;
>  	}
>
> +	hugetlb_vma_unlock_read(walk->vma);
>  	cond_resched();
> +	hugetlb_vma_lock_read(walk->vma);

We already hold the mmap_lock while rescheduling here, and on a
preemptive kernel we might happily reschedule at any point even without
the cond_resched(). So I'm not sure this unlock/relock optimization is
strictly required, or even helpful in practice.
In the worst case, concurrent unsharing would have to wait.
For example, s390_enable_skey() is called at most once per VM, and for
most VMs it never even gets called.
Or am I missing something important?
--
Thanks,
David / dhildenb