From: Julian Stecklina <jsteckli@xxxxxxxxx>

Only the xpfo_kunmap call that actually unmaps the page needs to be
serialized. We need to be careful to handle the case where, after the
atomic decrement of the mapcount, a concurrent xpfo_kmap has increased
the mapcount again. In this case, we can safely skip modifying the
page table.

Model-checked with Spin using up to 4 concurrent callers.

Signed-off-by: Julian Stecklina <jsteckli@xxxxxxxxx>
Cc: x86@xxxxxxxxxx
Cc: kernel-hardening@xxxxxxxxxxxxxxxxxx
Cc: Vasileios P. Kemerlis <vpk@xxxxxxxxxxxxxxx>
Cc: Juerg Haefliger <juerg.haefliger@xxxxxxxxxxxxx>
Cc: Tycho Andersen <tycho@xxxxxxxxxx>
Cc: Marco Benatto <marco.antonio.780@xxxxxxxxx>
Cc: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
Signed-off-by: Khalid Aziz <khalid.aziz@xxxxxxxxxx>
---
 mm/xpfo.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/xpfo.c b/mm/xpfo.c
index cbfeafc2f10f..dbf20efb0499 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -149,22 +149,24 @@ void xpfo_kunmap(void *kaddr, struct page *page)
 	if (!PageXpfoUser(page))
 		return;
 
-	spin_lock(&page->xpfo_lock);
-
 	/*
 	 * The page is to be allocated back to user space, so unmap it from the
 	 * kernel, flush the TLB and tag it as a user page.
 	 */
 	if (atomic_dec_return(&page->xpfo_mapcount) == 0) {
-#ifdef CONFIG_XPFO_DEBUG
-		BUG_ON(PageXpfoUnmapped(page));
-#endif
-		SetPageXpfoUnmapped(page);
-		set_kpte(kaddr, page, __pgprot(0));
-		xpfo_cond_flush_kernel_tlb(page, 0);
-	}
+		spin_lock(&page->xpfo_lock);
 
-	spin_unlock(&page->xpfo_lock);
+		/*
+		 * If we raced with kmap after the atomic_dec_return,
+		 * we must not nuke the mapping.
+		 */
+		if (atomic_read(&page->xpfo_mapcount) == 0) {
+			SetPageXpfoUnmapped(page);
+			set_kpte(kaddr, page, __pgprot(0));
+			xpfo_cond_flush_kernel_tlb(page, 0);
+		}
+		spin_unlock(&page->xpfo_lock);
+	}
 }
 EXPORT_SYMBOL(xpfo_kunmap);
 
--
2.17.1
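
P.S. For readers outside the kernel tree, here is a minimal stand-alone
sketch of the same check-after-decrement pattern, using C11 atomics and
a pthread spinlock in place of the kernel's atomic_t and
page->xpfo_lock. The names (sketch_kmap, sketch_kunmap) and the bool
standing in for the page-table state are illustrative assumptions, not
kernel API:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int mapcount = 1;		/* one mapping outstanding */
static pthread_spinlock_t lock;		/* stands in for page->xpfo_lock */
static bool unmapped;			/* stands in for the page-table state */

static void sketch_kmap(void)
{
	/* 0 -> 1 transition: restore the mapping under the lock. */
	if (atomic_fetch_add(&mapcount, 1) == 0) {
		pthread_spin_lock(&lock);
		unmapped = false;	/* set_kpte(..., kernel prot) */
		pthread_spin_unlock(&lock);
	}
}

static void sketch_kunmap(void)
{
	/* Fast path: other mappers remain, no serialization needed. */
	if (atomic_fetch_sub(&mapcount, 1) != 1)
		return;

	pthread_spin_lock(&lock);
	/*
	 * Re-check under the lock: a concurrent sketch_kmap may have
	 * raced in after the decrement; if so, skip the unmap.
	 */
	if (atomic_load(&mapcount) == 0)
		unmapped = true;	/* set_kpte(..., __pgprot(0)) */
	pthread_spin_unlock(&lock);
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	sketch_kunmap();			/* drop the initial mapping */
	printf("unmapped=%d\n", (int)unmapped);	/* prints unmapped=1 */
	pthread_spin_destroy(&lock);
	return 0;
}

Only the slow path (the last unmapper) takes the spinlock; the re-read
of mapcount under the lock is what makes it safe to skip the page-table
write when a mapper raced in between the decrement and the lock
acquisition.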