On Tue, May 24, 2022 at 01:50:03PM -0700, Mike Kravetz wrote:
> The routine huge_pmd_unshare is passed a pointer to an address
> associated with an area which may be unshared. If unshare is successful
> this address is updated to 'optimize' callers iterating over huge page
> addresses. For the optimization to work correctly, address should be
> updated to the last huge page in the unmapped/unshared area. However,
> in the common case where the passed address is PUD_SIZE aligned, the
> address is incorrectly updated to the address of the preceding huge
> page. That wastes CPU cycles as the unmapped/unshared range is scanned
> twice.
>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

Acked-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>

Thanks.