[PATCH RFC v1 0/5] Add SCSI per device tagsets

Currently a single blk_mq_tag_set is associated with each Scsi_Host. When the
number of SCSI controllers is limited, multiple devices must be attached to
the same controller. In cloud environments with relatively high-latency
persistent storage, requiring all devices on a controller to share a single
blk_mq_tag_set negatively impacts performance.
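
To make the structural change concrete, below is a minimal sketch of how a
per device tag set could be set up, mirroring what scsi_mq_setup_tags() does
for the host-wide set today. The tag_set field on struct scsi_device and the
helper name are hypothetical illustrations of the direction, not the actual
patch contents:

/*
 * Sketch only: a per-device analogue of scsi_mq_setup_tags() in
 * drivers/scsi/scsi_lib.c, as it might be called during device scan.
 * The sdev->tag_set field and this helper's name are hypothetical.
 */
static int scsi_mq_setup_device_tags(struct scsi_device *sdev)
{
	struct Scsi_Host *shost = sdev->host;
	struct blk_mq_tag_set *tag_set = &sdev->tag_set;

	memset(tag_set, 0, sizeof(*tag_set));
	tag_set->ops = &scsi_mq_ops;
	tag_set->nr_hw_queues = shost->nr_hw_queues ? : 1;
	/*
	 * Tags are private to this LUN, so the depth can be sized per
	 * device instead of from shost->can_queue for the whole host.
	 */
	tag_set->queue_depth = sdev->queue_depth;
	/* the host-wide version also reserves scatterlist space here */
	tag_set->cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size;
	tag_set->numa_node = NUMA_NO_NODE;
	tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
	tag_set->driver_data = shost;

	return blk_mq_alloc_tag_set(tag_set);
}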

For example, a device provisioned with high IOPS and bandwidth limits on the
same controller as a smaller, slower device can starve the slower device of
tags. This is especially noticeable when the slower device's workload
consists of low-I/O-depth tasks.
A common configuration for a journaling database application is to direct all
I/O except journal writes to one device and the journal writes to another.
This can decrease transaction commit latency and improve performance.
However, an I/O-bound database workload, for example one issuing a large
number of random reads to the device with high provisioned IOPS, can consume
all of the tags in the Scsi_Host tag set, resulting in poor overall
performance as the journal writes experience high latency.

Take a 16-core VM with two devices attached to the same controller: the
first, mounted at /mnt/big_device, has a combined (VM + disk) provisioned
bandwidth of 750 MBps and 18000 IOPS; the second, mounted at
/mnt/small_device, has a combined provisioned bandwidth of 170 MBps and
3500 IOPS.

The following fio job file demonstrates the benefit of per device tag sets:

[global]
time_based=1
ioengine=io_uring
direct=1
runtime=1000

[read_hogs]
bs=16k
iodepth=10000
rw=randread
filesize=10G
numjobs=32
directory=/mnt/big_device

[wal]
bs=8k
iodepth=3
filesize=4G
rw=write
numjobs=1
directory=/mnt/small_device

Also note that for this example I have configured:
small device LUN queue depth: 10
large device LUN queue depth: 70
nr_hw_queues: 1
nr_requests: 170
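
The per-LUN queue depths above can be applied with the standard
scsi_change_queue_depth() helper, for example from a driver's
->slave_configure() hook (runtime changes through the queue_depth sysfs
attribute typically reach the same helper). A minimal sketch, with a
hypothetical is_small_device() predicate standing in for however the two
LUNs are told apart:

static int example_slave_configure(struct scsi_device *sdev)
{
	/* small device LUN queue depth: 10; large device: 70 */
	int depth = is_small_device(sdev) ? 10 : 70;

	scsi_change_queue_depth(sdev, depth);
	return 0;
}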

On master, the sequential write job averages around 5 MBps. The random reads
hit the provisioned IOPS both on master and with the patches applied. With
per device tag sets, the sequential write job averages 29 MBps -- the same
as when it runs alone on the VM.

Open questions and TODOs:

The following open items are for the "Add per device tag sets" patch:

- show_nr_hw_queues() does not work in this implementation. I wasn't
  sure how to get from the host attribute to a scsi_device and its
  blk_mq_tag_set. I also assume there are other sysfs changes that need
  to be made, but I wasn't sure which ones.

- scsi_host_busy(): I've modified scsi_host_busy() such that, when device
  tag sets are in use, its remaining callers will iterate over all the
  attached devices and check their associated blk_mq_tag_sets (see the
  first sketch after this list). I don't know if this is the correct
  thing to do.

  What does the concept of the host being busy mean with device tag
  sets? Does it depend on the caller and the context? Should this form a
  new concept of scsi device busy?

  Also, could this cause deadlocks since this iteration requires the
  host_lock?

  I assume that there still needs to be a concept of host failed
  (Scsi_Host->host_failed) even with device tag sets because if all of
  the inflight requests for a host have failed, then something is
  probably wrong with the controller?

- scsi_host_queue_ready(): I've modified scsi_host_queue_ready() such
  that, when device tag sets are in use, it only checks the device's
  tag set when determining starvation (see the second sketch after this
  list). It seemed to me that if a request can only acquire a tag from
  its target device's tag set, then it doesn't matter whether other tag
  sets on the host have available tags.
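
To make the scsi_host_busy() item concrete, this first sketch shows roughly
what iterating the attached devices' tag sets could look like, reusing the
scsi_host_check_in_flight() counter callback that scsi_lib.c already has.
Note that shost_for_each_device() takes the host_lock internally while
advancing the iterator, which is exactly the locking question raised above:

int scsi_host_busy(struct Scsi_Host *shost)
{
	struct scsi_device *sdev;
	int cnt = 0;

	/*
	 * Sum in-flight requests across every attached device's tag
	 * set; the sdev->tag_set field is hypothetical.
	 */
	shost_for_each_device(sdev, shost)
		blk_mq_tagset_busy_iter(&sdev->tag_set,
					scsi_host_check_in_flight, &cnt);
	return cnt;
}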
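
For the scsi_host_queue_ready() item, this second sketch narrows the busy
check to the issuing device's own tag set. scsi_host_queue_ready() would
call this (hypothetical) helper where it calls scsi_host_busy(shost) today
when deciding whether a blocked host has drained:

static int scsi_device_tagset_busy(struct scsi_device *sdev)
{
	int cnt = 0;

	/* count in-flight requests in this device's private tag set */
	blk_mq_tagset_busy_iter(&sdev->tag_set,
				scsi_host_check_in_flight, &cnt);
	return cnt;
}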

Melanie Plageman (Microsoft) (5):
  scsi: core: Rename host_tagset to hctx_share_tags
  scsi: map_queues() takes tag set instead of host
  scsi: core: Add per device tag sets
  scsi: storvsc: use per device tag sets
  scsi: storvsc: Hardware queues share blk_mq_tags

 drivers/scsi/hisi_sas/hisi_sas_v2_hw.c    |  7 +-
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    |  7 +-
 drivers/scsi/hosts.c                      | 32 ++++++---
 drivers/scsi/ibmvscsi/ibmvfc.c            |  2 +-
 drivers/scsi/megaraid/megaraid_sas_base.c |  8 ++-
 drivers/scsi/mpi3mr/mpi3mr_os.c           |  4 +-
 drivers/scsi/mpt3sas/mpt3sas_base.c       |  6 +-
 drivers/scsi/mpt3sas/mpt3sas_scsih.c      |  9 +--
 drivers/scsi/qla2xxx/qla_os.c             |  7 +-
 drivers/scsi/scsi_debug.c                 |  7 +-
 drivers/scsi/scsi_lib.c                   | 30 +++++---
 drivers/scsi/scsi_priv.h                  |  2 +-
 drivers/scsi/scsi_scan.c                  | 30 ++++++--
 drivers/scsi/scsi_sysfs.c                 | 11 ++-
 drivers/scsi/smartpqi/smartpqi_init.c     |  7 +-
 drivers/scsi/storvsc_drv.c                | 86 +++++++++++++++++++++--
 drivers/scsi/virtio_scsi.c                |  5 +-
 include/scsi/scsi_device.h                |  1 +
 include/scsi/scsi_host.h                  | 53 ++++++++++++--
 include/scsi/scsi_tcq.h                   | 15 ++--
 20 files changed, 258 insertions(+), 71 deletions(-)

-- 
2.25.1



