Re: [RFC PATCH v6 3/5] block: add a helper function to merge the segments by an IOMMU

On 13/06/2019 11:20, Yoshihiro Shimoda wrote:
This patch adds a helper function to check whether a queue can merge
the segments by an IOMMU.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@xxxxxxxxxxx>
---
  block/blk-settings.c   | 28 ++++++++++++++++++++++++++++
  include/linux/blkdev.h |  2 ++
  2 files changed, 30 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 45f2c52..4e4e13e 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -4,9 +4,11 @@
   */
  #include <linux/bio.h>
  #include <linux/blkdev.h>
+#include <linux/device.h>
  #include <linux/gcd.h>
  #include <linux/gfp.h>
  #include <linux/init.h>
+#include <linux/iommu.h>
  #include <linux/jiffies.h>
  #include <linux/kernel.h>
  #include <linux/lcm.h>
@@ -831,6 +833,32 @@ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
  }
  EXPORT_SYMBOL_GPL(blk_queue_write_cache);
+/**
+ * blk_queue_can_use_iommu_merging - configure queue for merging segments.
+ * @q:		the request queue for the device
+ * @dev:	the device pointer for dma
+ *
+ * Tell the block layer whether an IOMMU can merge the segments of @q.
+ */
+bool blk_queue_can_use_iommu_merging(struct request_queue *q,
+				     struct device *dev)
+{
+	struct iommu_domain *domain;
+
+	/*
+	 * If the device DMA is translated by an IOMMU, we can assume
+	 * the device can merge the segments.
+	 */
+	if (!device_iommu_mapped(dev))

Careful here - I think this validates the comment I made when this function was introduced: the name doesn't necessarily mean what it sounds like it might mean. "iommu_mapped" was as close as we managed to get to a convenient shorthand for "performs DMA through an IOMMU-API-enabled IOMMU". Specifically, it does not imply that translation is *currently* active; if you boot with "iommu=pt" or equivalent, this will still return true even though the device will be using direct/SWIOTLB DMA ops without any IOMMU translation.

Robin.
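
A minimal sketch of the distinction (editorial illustration only, assuming
the linux/iommu.h API of this series; the helper name is invented and is not
part of the patch) would check the type of the currently attached domain
rather than relying on device_iommu_mapped() alone:

#include <linux/device.h>
#include <linux/iommu.h>

/*
 * device_iommu_mapped() only says the device sits behind an IOMMU that the
 * IOMMU API knows about.  With "iommu=pt" the default domain is an identity
 * mapping and DMA still goes through the direct/SWIOTLB ops, so look at the
 * attached domain's type to see whether translation is actually in use.
 */
static bool example_dev_dma_is_iommu_translated(struct device *dev)
{
	struct iommu_domain *domain;

	if (!device_iommu_mapped(dev))
		return false;

	domain = iommu_get_domain_for_dev(dev);

	/* An identity (passthrough) default domain means no translation */
	return domain && domain->type == IOMMU_DOMAIN_DMA;
}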

+		return false;
+
+	domain = iommu_get_domain_for_dev(dev);
+	/* No need to update max_segment_size. see blk_queue_virt_boundary() */
+	blk_queue_virt_boundary(q, iommu_get_minimum_page_size(domain) - 1);
+
+	return true;
+}
+
  static int __init blk_settings_init(void)
  {
  	blk_max_low_pfn = max_low_pfn - 1;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 592669b..4d1f7dc 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1091,6 +1091,8 @@ extern void blk_queue_dma_alignment(struct request_queue *, int);
  extern void blk_queue_update_dma_alignment(struct request_queue *, int);
  extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
  extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
+extern bool blk_queue_can_use_iommu_merging(struct request_queue *q,
+					    struct device *dev);
  /*
   * Number of physical segments as sent to the device.



