>-----Original Message-----
>From: Joerg Roedel [mailto:joro@xxxxxxxxxx]
>Sent: Monday, May 29, 2017 8:09 PM
>To: Nath, Arindam <Arindam.Nath@xxxxxxx>; Lendacky, Thomas
><Thomas.Lendacky@xxxxxxx>
>Cc: iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx; amd-gfx@xxxxxxxxxxxxxxxxxxxxx;
>Deucher, Alexander <Alexander.Deucher@xxxxxxx>; Bridgman, John
><John.Bridgman@xxxxxxx>; drake@xxxxxxxxxxxx; Suthikulpanit, Suravee
><Suravee.Suthikulpanit@xxxxxxx>; linux@xxxxxxxxxxxx; Craig Stein
><stein12c@xxxxxxxxx>; michel@xxxxxxxxxxx; Kuehling, Felix
><Felix.Kuehling@xxxxxxx>; stable@xxxxxxxxxxxxxxx
>Subject: Re: [PATCH] iommu/amd: flush IOTLB for specific domains only (v3)
>
>Hi Arindam,
>
>I met Tom Lendacky in Nuremberg last week, and he told me he is
>working on the same area of the code that this patch touches. His
>reason for touching this code was to solve some locking problems.
>Maybe you two can work together on a joint approach to improve this?

Sure Joerg, I will work with Tom. Thanks.

>
>On Mon, May 22, 2017 at 01:18:01PM +0530, arindam.nath@xxxxxxx wrote:
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index 63cacf5..1edeebec 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -2227,15 +2227,26 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
>>
>>  static void __queue_flush(struct flush_queue *queue)
>>  {
>> -	struct protection_domain *domain;
>> -	unsigned long flags;
>>  	int idx;
>>
>> -	/* First flush TLB of all known domains */
>> -	spin_lock_irqsave(&amd_iommu_pd_lock, flags);
>> -	list_for_each_entry(domain, &amd_iommu_pd_list, list)
>> -		domain_flush_tlb(domain);
>> -	spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
>> +	/* First flush TLB of all domains which were added to flush queue */
>> +	for (idx = 0; idx < queue->next; ++idx) {
>> +		struct flush_queue_entry *entry;
>> +
>> +		entry = queue->entries + idx;
>> +
>> +		/*
>> +		 * There might be cases where multiple IOVA entries for the
>> +		 * same domain are queued in the flush queue. To avoid
>> +		 * flushing the same domain again, we check whether the
>> +		 * flag is set or not. This improves the efficiency of
>> +		 * the flush operation.
>> +		 */
>> +		if (!entry->dma_dom->domain.already_flushed) {
>> +			entry->dma_dom->domain.already_flushed = true;
>> +			domain_flush_tlb(&entry->dma_dom->domain);
>> +		}
>> +	}
>
>There is also a race condition here. It is not introduced by your
>patch, but it needs fixing anyway. I'll look into this too.
>
>
>Regards,
>
>	Joerg
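
For readers following the thread, here is a minimal, self-contained
userspace sketch of the deduplication pattern the hunk above applies.
The types and names below (struct domain, struct flush_queue,
domain_flush_tlb, queue_flush) are simplified stand-ins, not the
actual amd_iommu.c definitions, and the sketch assumes the
already_flushed flag is cleared whenever an entry is added to the
queue, so that every drain starts with all flags false:

#include <stdbool.h>
#include <stdio.h>

struct domain {
	int id;
	bool already_flushed;	/* assumed cleared when entries are queued */
};

struct flush_queue_entry {
	struct domain *dom;
};

struct flush_queue {
	int next;		/* number of queued entries */
	struct flush_queue_entry entries[8];
};

/* Stand-in for the real IOTLB flush of one protection domain. */
static void domain_flush_tlb(struct domain *dom)
{
	printf("flush IOTLB of domain %d\n", dom->id);
}

static void queue_flush(struct flush_queue *queue)
{
	int idx;

	/*
	 * Several IOVA entries may reference the same domain; the flag
	 * ensures each domain's TLB is flushed at most once per drain,
	 * instead of flushing every known domain as the old code did.
	 */
	for (idx = 0; idx < queue->next; ++idx) {
		struct domain *dom = queue->entries[idx].dom;

		if (!dom->already_flushed) {
			dom->already_flushed = true;
			domain_flush_tlb(dom);
		}
	}
	queue->next = 0;
}

int main(void)
{
	struct domain d0 = { .id = 0 }, d1 = { .id = 1 };
	struct flush_queue queue = {
		.next = 3,
		.entries = { { &d0 }, { &d0 }, { &d1 } },
	};

	/* Prints two flushes, not three: the second d0 entry is skipped. */
	queue_flush(&queue);
	return 0;
}

Compared with the old code, which walked amd_iommu_pd_list and flushed
every known domain under amd_iommu_pd_lock, a drain now touches only
the domains actually referenced by queued IOVA entries, and each of
those at most once.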