Re: [bug report] NVMe/IB: reset_controller need more than 1min

On 3/21/2022 11:28 AM, Sagi Grimberg wrote:

WDYT? Should we reconsider the "nvme connect --with_metadata" option?

Maybe you can make these lazily allocated?

You mean something like:

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fd4720d37cc0..367ba0bb62ab 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1620,10 +1620,19 @@ int nvme_getgeo(struct block_device *bdev, struct hd_geometry *geo)
  }

  #ifdef CONFIG_BLK_DEV_INTEGRITY
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
-                               u32 max_integrity_segments)
+static int nvme_init_integrity(struct gendisk *disk, struct nvme_ns *ns)
  {
         struct blk_integrity integrity = { };
+       u16 ms = ns->ms;
+       u8 pi_type = ns->pi_type;
+       u32 max_integrity_segments = ns->ctrl->max_integrity_segments;
+       int ret;
+
+       if (ns->ctrl->ops->init_integrity) {
+               ret = ns->ctrl->ops->init_integrity(ns->ctrl);
+               if (ret)
+                       return ret;
+       }

         switch (pi_type) {
         case NVME_NS_DPS_PI_TYPE3:
@@ -1644,11 +1653,13 @@ static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
         integrity.tuple_size = ms;
         blk_integrity_register(disk, &integrity);
         blk_queue_max_integrity_segments(disk->queue, max_integrity_segments);
+
+       return 0;
  }
  #else
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
-                               u32 max_integrity_segments)
+static int nvme_init_integrity(struct gendisk *disk, struct nvme_ns *ns)
  {
+       return 0;
  }
  #endif /* CONFIG_BLK_DEV_INTEGRITY */

@@ -1853,8 +1864,8 @@ static void nvme_update_disk_info(struct gendisk *disk,
         if (ns->ms) {
                 if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
-                    (ns->features & NVME_NS_METADATA_SUPPORTED))
-                       nvme_init_integrity(disk, ns->ms, ns->pi_type,
-                                           ns->ctrl->max_integrity_segments);
-               else if (!nvme_ns_has_pi(ns))
+                    (ns->features & NVME_NS_METADATA_SUPPORTED)) {
+                       if (nvme_init_integrity(disk, ns))
+                               capacity = 0;
+               } else if (!nvme_ns_has_pi(ns))
                         capacity = 0;
         }
@@ -4395,7 +4406,7 @@ EXPORT_SYMBOL_GPL(nvme_stop_ctrl);


and create the resources for the first PI-formatted namespace we find?
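
Concretely, for nvme-rdma such an ops->init_integrity callback could look
roughly like the sketch below (nvme_rdma_init_integrity() is a made-up name;
it reuses the existing ib_mr_pool_init() helper and the driver's
nvme_rdma_get_max_fr_pages(), and teardown on reset/delete is left out):

/*
 * Sketch only: lazily allocate the signature MR pools for all I/O
 * queues the first time a PI-formatted namespace is scanned, instead
 * of unconditionally at connect time.
 */
static int nvme_rdma_init_integrity(struct nvme_ctrl *nctrl)
{
	struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
	int i, ret;

	/* queue 0 is the admin queue; PI traffic only hits the I/O queues */
	for (i = 1; i < nctrl->queue_count; i++) {
		struct nvme_rdma_queue *queue = &ctrl->queues[i];
		u32 pages_per_mr =
			nvme_rdma_get_max_fr_pages(queue->device->dev, true) + 1;

		/* skip queues whose signature MR pool already exists */
		if (!list_empty(&queue->qp->sig_mrs))
			continue;

		ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs,
				      queue->queue_size, IB_MR_TYPE_INTEGRITY,
				      pages_per_mr, pages_per_mr);
		if (ret)
			return ret;
	}
	return 0;
}

The scan path would call this once per PI-formatted namespace it finds;
repeated calls are cheap because already-populated queues are skipped.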



I was thinking more along the lines of allocating them as soon as an I/O
comes in with PI... Is there something internal to the driver that can
be done in parallel to expedite the allocation of these extra resources?

Since when are we allocating things in the fast path?

We allocate a pool of MRs per queue, not an MR per task.

Do you think it's better to allocate the whole pool on the first PI I/O and pay the latency penalty on that I/O?
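
For comparison, the allocate-on-first-PI-I/O idea would look something like
the sketch below (pi_pool_ready and pi_pool_lock are made-up fields; note
that ib_mr_pool_init() allocates with GFP_KERNEL, so this can't run from a
non-sleeping queue_rq context as-is, which is exactly the fast-path concern):

/*
 * Illustrative only: create the signature MR pool the first time a
 * request carrying PI reaches this queue. The first such I/O pays the
 * whole allocation latency; later I/Os only do the flag test.
 */
static int nvme_rdma_ensure_pi_pool(struct nvme_rdma_queue *queue,
				    u32 pages_per_mr)
{
	int ret = 0;

	if (likely(READ_ONCE(queue->pi_pool_ready)))	/* made-up flag */
		return 0;

	mutex_lock(&queue->pi_pool_lock);		/* made-up lock */
	if (!queue->pi_pool_ready) {
		ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs,
				      queue->queue_size, IB_MR_TYPE_INTEGRITY,
				      pages_per_mr, pages_per_mr);
		if (!ret)
			WRITE_ONCE(queue->pi_pool_ready, true);
	}
	mutex_unlock(&queue->pi_pool_lock);
	return ret;
}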

-Max.




