[PATCH] iommu/amd: Check PM_LEVEL_SIZE() condition in locked section


 



On Thu, Oct 17, 2019 at 07:36:51AM -0400, Qian Cai wrote:
> 
> 
> > On Oct 16, 2019, at 6:59 PM, Jerry Snitselaar <jsnitsel@xxxxxxxxxx> wrote:
> > 
> > I guess the mode level 6 check is really for other potential callers
> > of increase_address_space(); none exist at the moment, and the
> > condition of the while loop in alloc_pte() should fail if the mode
> > level is 6.
> 
> Because there is no locking around iommu_map_page(), if there are
> several concurrent callers of it for the same domain, could it
> silently corrupt data due to invalid accesses?

No, that can't happen, because increase_address_space() locks the
domain before actually doing anything, so the address space can't grow
beyond domain->mode == 6. But what can happen is that the WARN_ON_ONCE()
in there triggers falsely and that the address space is increased
multiple times when a single increase would be sufficient.
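
To make that concrete, here is a trimmed sketch of the pre-patch code
paths (taken from the hunks in the patch below, not complete
functions): the size check in alloc_pte() runs without domain->lock,
so two CPUs mapping into the same domain can both decide the address
space needs to grow.

	/* alloc_pte(): size check done without holding domain->lock,
	 * so two concurrent callers can both pass it */
	while (address > PM_LEVEL_SIZE(domain->mode))
		*updated = increase_address_space(domain, gfp) || *updated;

	/* increase_address_space(): the lock is only taken here, and the
	 * old code re-checks nothing but the mode-6 limit, so the second
	 * caller grows the address space again even though the first
	 * grow already made it large enough for 'address' */
	spin_lock_irqsave(&domain->lock, flags);

	if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
		/* address space already 64 bit large */
		goto out;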

To fix this we just need to check the PM_LEVEL_SIZE() condition again
when we hold the lock:

From e930e792a998e89dfd4feef15fbbf289c45124dc Mon Sep 17 00:00:00 2001
From: Joerg Roedel <jroedel@xxxxxxx>
Date: Fri, 18 Oct 2019 11:34:22 +0200
Subject: [PATCH] iommu/amd: Check PM_LEVEL_SIZE() condition in locked section

The increase_address_space() function has to check the PM_LEVEL_SIZE()
condition again under the domain->lock to avoid a false trigger of the
WARN_ON_ONCE() and to avoid increasing the address space more often
than necessary.

Reported-by: Qian Cai <cai@xxxxxx>
Fixes: 754265bcab78 ("iommu/amd: Fix race in increase_address_space()")
Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
---
 drivers/iommu/amd_iommu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 2369b8af81f3..a0639e511ffe 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1463,6 +1463,7 @@ static void free_pagetable(struct protection_domain *domain)
  * to 64 bits.
  */
 static bool increase_address_space(struct protection_domain *domain,
+				   unsigned long address,
 				   gfp_t gfp)
 {
 	unsigned long flags;
@@ -1471,8 +1472,8 @@ static bool increase_address_space(struct protection_domain *domain,
 
 	spin_lock_irqsave(&domain->lock, flags);
 
-	if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
-		/* address space already 64 bit large */
+	if (address <= PM_LEVEL_SIZE(domain->mode) ||
+	    WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
 		goto out;
 
 	pte = (void *)get_zeroed_page(gfp);
@@ -1505,7 +1506,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
 	BUG_ON(!is_power_of_2(page_size));
 
 	while (address > PM_LEVEL_SIZE(domain->mode))
-		*updated = increase_address_space(domain, gfp) || *updated;
+		*updated = increase_address_space(domain, address, gfp) || *updated;
 
 	level   = domain->mode - 1;
 	pte     = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
-- 
2.16.4



