On 2011-05-18 03:57, Alex Williamson wrote:
> On Tue, 2011-05-17 at 13:41 -0600, Alex Williamson wrote:
>> On Mon, 2011-05-16 at 09:13 -0600, Alex Williamson wrote:
>>> On Mon, 2011-05-09 at 15:48 +0100, David Woodhouse wrote:
>>>> On Thu, 2011-05-05 at 18:13 -0700, Yinghai Lu wrote:
>>>>> @@ -3252,6 +3252,9 @@ static int device_notifier(struct notifi
>>>>>  		return 0;
>>>>>  
>>>>>  	if (action == BUS_NOTIFY_UNBOUND_DRIVER && !iommu_pass_through) {
>>>>> +		/* before we remove dev with domain, flush IOTLB */
>>>>> +		flush_unmaps();
>>>>> +
>>>>>  		domain_remove_one_dev_info(domain, pdev);
>>>>>  
>>>>>  		if (!(domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE) &&
>>>>
>>>> That calls flush_unmaps() without the async_umap_flush_lock held,
>>>> doesn't it? A few days ago I asked someone else to test this candidate
>>>> patch for a similar issue:
>>>>
>>>> http://david.woodhou.se/flush-unmaps-on-unbind.patch
>>>
>>> Copying here:
>>>
>>>> diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
>>>> index d552d2c..7e606d6 100644
>>>> --- a/drivers/pci/intel-iommu.c
>>>> +++ b/drivers/pci/intel-iommu.c
>>>> @@ -3256,8 +3259,10 @@ static int device_notifier(struct notifier_block *nb,
>>>>  
>>>>  		if (!(domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE) &&
>>>>  		    !(domain->flags & DOMAIN_FLAG_STATIC_IDENTITY) &&
>>>> -		    list_empty(&domain->devices))
>>>> +		    list_empty(&domain->devices)) {
>>>> +			flush_unmaps_timeout(0);
>>>>  			domain_exit(domain);
>>>> +		}
>>>>  	}
>>>>  
>>>>  	return 0;
>>>> @@ -3587,6 +3592,7 @@ static void intel_iommu_domain_destroy(struct iommu_domain *domain)
>>>>  	struct dmar_domain *dmar_domain = domain->priv;
>>>>  
>>>>  	domain->priv = NULL;
>>>> +	flush_unmaps_timeout(0);
>>>>  	vm_domain_exit(dmar_domain);
>>>>  }
>>>
>>> David, would it be worthwhile to push the unmaps into the
>>> {vm_}domain_exit() functions to avoid races like this in the future? I
>>> can verify the above resolves a panic after unbinding a device from
>>> snd_hda_intel that I hit recently. Do you plan to push this for .39?
>>
>> BTW, is this second chunk really needed? VM iommu mappings don't seem
>> to use the lazy unmap path. Thanks,
>
> David, what do you think of this instead? Thanks,
>
> Alex
>
> intel-iommu: Flush unmaps at domain_exit
>
> From: Alex Williamson <alex.williamson@xxxxxxxxxx>
>
> We typically batch unmaps to be lazily flushed out at
> regular intervals. When we destroy a domain, we need
> to force a flush of these lazy unmaps to be sure none
> reference the domain we're about to free.
>
> Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
> ---
>
>  drivers/pci/intel-iommu.c |    4 ++++
>  1 files changed, 4 insertions(+), 0 deletions(-)
>
>
> diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
> index d552d2c..b04f84e 100644
> --- a/drivers/pci/intel-iommu.c
> +++ b/drivers/pci/intel-iommu.c
> @@ -1416,6 +1416,10 @@ static void domain_exit(struct dmar_domain *domain)
>  	if (!domain)
>  		return;
>  
> +	/* Flush any lazy unmaps that may reference this domain */
> +	if (!intel_iommu_strict)
> +		flush_unmaps_timeout(0);
> +
>  	domain_remove_dev_info(domain);
>  	/* destroy iovas */
>  	put_iova_domain(&domain->iovad);

Can we get this resolved for 2.6.39.1? Via David's patch, or Alex's (which
works for me), or whatever. It's a really annoying regression of .39 when
you want to run with IOMMU enabled.

Thanks,
Jan
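
For context on what Alex's patch is guarding against, below is a minimal,
stand-alone C sketch of the deferred-unmap pattern being discussed. All of
the names in it (queue_unmap, deferred[], domain_teardown) are invented for
illustration only; they mirror the idea behind intel-iommu's
flush_unmaps()/flush_unmaps_timeout() batching, not the actual driver code.

#include <stdio.h>
#include <stdlib.h>

#define MAX_DEFERRED 16

struct domain {
	int id;
};

struct deferred_unmap {
	struct domain *dom;	/* domain the queued unmap still refers to */
	unsigned long iova;	/* simplified stand-in for the IOVA range */
};

static struct deferred_unmap deferred[MAX_DEFERRED];
static int ndeferred;

/* Process every queued entry; afterwards nothing references any domain. */
static void flush_unmaps(void)
{
	for (int i = 0; i < ndeferred; i++)
		printf("flushing iova 0x%lx for domain %d\n",
		       deferred[i].iova, deferred[i].dom->id);
	ndeferred = 0;
}

/* The "lazy" path: queue the unmap instead of flushing immediately. */
static void queue_unmap(struct domain *dom, unsigned long iova)
{
	if (ndeferred == MAX_DEFERRED)
		flush_unmaps();
	deferred[ndeferred].dom = dom;
	deferred[ndeferred].iova = iova;
	ndeferred++;
}

/*
 * Tear down a domain.  Draining the queue first is the whole point of the
 * patch: once the domain is freed, any queued entry still pointing at it
 * would later be processed against freed memory.
 */
static void domain_teardown(struct domain *dom)
{
	flush_unmaps();
	free(dom);
}

int main(void)
{
	struct domain *dom = malloc(sizeof(*dom));

	if (!dom)
		return 1;
	dom->id = 1;

	queue_unmap(dom, 0x1000);
	queue_unmap(dom, 0x2000);

	/* Without the flush inside domain_teardown(), the two entries above
	 * would outlive the domain and be flushed with a stale pointer. */
	domain_teardown(dom);
	return 0;
}

The same reasoning applies in the driver: domain_exit() has to drain the
deferred queue (flush_unmaps_timeout(0)) before freeing the domain,
otherwise the later timer-driven flush walks entries whose domain pointer
is already stale.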