On 22/05/17 10:08 AM, Michel Dänzer wrote:
> On 19/05/17 07:02 PM, arindam.nath at amd.com wrote:
>> From: Arindam Nath <arindam.nath at amd.com>
>>
>> Change History
>> --------------
>>
>> v2: changes suggested by Joerg
>> - add flush flag to improve efficiency of flush operation
>>
>> v1:
>> - The idea behind flush queues is to defer the IOTLB flushing
>>   for domains whose mappings are no longer valid. We add such
>>   domains in queue_add(), and when the queue size reaches
>>   FLUSH_QUEUE_SIZE, we perform __queue_flush().
>>
>>   Since we have already taken the lock before __queue_flush()
>>   is called, we need to make sure the IOTLB flushing is
>>   performed as quickly as possible.
>>
>>   In the current implementation, we perform IOTLB flushing
>>   for all domains, irrespective of which ones were actually
>>   added to the flush queue in the first place. This can be
>>   quite expensive, especially for domains for which no
>>   unmapping is required at this point in time.
>>
>>   This patch makes use of the domain information in
>>   'struct flush_queue_entry' to make sure we only flush
>>   IOTLBs for domains that need it, skipping the others.
>>
>> Suggested-by: Joerg Roedel <joro at 8bytes.org>
>> Signed-off-by: Arindam Nath <arindam.nath at amd.com>
>
> Please add these tags:
>
> Fixes: b1516a14657a ("iommu/amd: Implement flush queue")
> Cc: stable at vger.kernel.org

Also Bugzilla: https://bugs.freedesktop.org/101029


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer
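
For readers not familiar with the flush queue code, the following is a
minimal sketch of the idea described in the changelog above, not the
actual patch: walk the queued entries first and mark the domains that
actually have stale IOTLB entries, then flush only those domains instead
of every domain on the IOMMU's domain list. The "needs_flush" field is a
hypothetical name used purely for illustration; the real v2 patch may use
a different flag or mechanism.

static void __queue_flush(struct flush_queue *queue)
{
	struct protection_domain *domain;
	unsigned long flags;
	int idx;

	/* Mark only the domains that have entries in this queue
	 * (needs_flush is a hypothetical per-domain flag). */
	for (idx = 0; idx < queue->next; ++idx) {
		struct flush_queue_entry *entry = queue->entries + idx;

		entry->dma_dom->domain.needs_flush = true;
	}

	/* Flush the IOTLBs of marked domains only, skip the rest */
	spin_lock_irqsave(&amd_iommu_pd_lock, flags);
	list_for_each_entry(domain, &amd_iommu_pd_list, list) {
		if (!domain->needs_flush)
			continue;
		domain_flush_tlb(domain);
		domain->needs_flush = false;
	}
	spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);

	/* Wait for the flushes to complete before releasing the IOVAs */
	domain_flush_complete(NULL);

	for (idx = 0; idx < queue->next; ++idx) {
		struct flush_queue_entry *entry = queue->entries + idx;

		free_iova_fast(&entry->dma_dom->iovad,
			       entry->iova_pfn, entry->pages);
	}

	queue->next = 0;
}

Compared to flushing every domain on amd_iommu_pd_list unconditionally,
this only issues flush commands for domains that actually contributed
entries to the queue, which is what keeps the time spent under the queue
lock short.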