Re: [PATCH 1/1] scsi core: limit overhead of device_busy counter for SSDs

On Wed, Nov 20, 2019 at 12:29 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
>
> On Tue, Nov 19, 2019 at 12:07:59PM -0800, Sumanesh Samanta wrote:
> > From: root <sumanesh.samanta@xxxxxxxxxxxx>
> >
> > Recently a patch was delivered to remove the host_busy counter from the SCSI mid layer. That counter was a major bottleneck, and removing it helped improve SCSI stack performance.
> > With that patch, the bottleneck moved to the scsi_device device_busy counter. The performance issue with this counter is most visible where a single device can produce very high IOPS, for example h/w RAID devices where the OS sees one device but there are many drives behind it, making it capable of very high IOPS. The effect is also visible when cores from multiple NUMA nodes send IO to the same device or the same controller.
> > The device_busy counter is not needed by controllers that can handle as many IOs as are submitted to them. Rotating media still benefits from it for IO merging, but for non-rotating SSDs it becomes a major bottleneck, as described above.
> >
> > A few weeks back, a patch was provided to address the device_busy counter as well, but unfortunately it had some issues:
> > 1. There was a functional issue discovered:
> > https://lists.01.org/hyperkitty/list/lkp@xxxxxxxxxxxx/thread/VFKDTG4XC4VHWX5KKDJJI7P36EIGK526/
> > 2. There was some concern about existing drivers using the device_busy counter.
>
> There are only two drivers (mpt3sas and megaraid_sas) which use this
> counter. And there are two types of usage:
>
> 1) both use .device_busy to balance interrupt load among LUNs in the
> fast path
>
> 2) mpt3sas uses .device_busy in its device reset handler (slow path), and
> this kind of usage can be replaced by blk_mq_queue_tag_busy_iter()
> easily.
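
For reference, here is a rough sketch of how a driver could count the
in-flight commands for one scsi_device without reading .device_busy. It
walks the host tag set with the exported blk_mq_tagset_busy_iter()
(callback prototype as of the v5.4-era block layer); all the other names
are made up for illustration and are not taken from either driver or from
the posted series:

/*
 * Illustrative only: count in-flight commands for one scsi_device by
 * walking the host tag set instead of reading sdev->device_busy.
 */
#include <linux/blk-mq.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

struct sdev_count_data {
	struct scsi_device *sdev;
	unsigned int inflight;
};

/* Called for each started request in the tag set; return true to keep going. */
static bool count_sdev_inflight(struct request *rq, void *priv, bool reserved)
{
	struct sdev_count_data *data = priv;
	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);

	if (cmd->device == data->sdev)
		data->inflight++;
	return true;
}

static unsigned int sdev_inflight(struct scsi_device *sdev)
{
	struct sdev_count_data data = { .sdev = sdev };

	blk_mq_tagset_busy_iter(&sdev->host->tag_set, count_sdev_inflight,
				&data);
	return data.inflight;
}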
>
> IMO, blk-mq already takes IO load balancing into account, see
> hctx_may_queue(); meanwhile, managed IRQs can balance IO completion load
> among the IRQ vectors, so I don't see an obvious reason for drivers to
> do that any more.
>
> However, if the two drivers still want to do that, I'd suggest
> implementing it inside the driver; there is no reason to re-invent
> generic wheels just for two drivers.
>
> That is why I replaced the .device_busy uses in the two drivers with
> private counters in the patches posted a few days ago:
>
> https://lore.kernel.org/linux-scsi/20191118103117.978-1-ming.lei@xxxxxxxxxx/T/#t
>

Agreed, a private counter should be good enough.
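
For anyone following along, a minimal sketch of what such a
driver-private counter could look like; every name below is made up for
illustration and is not the actual mpt3sas/megaraid_sas code:

/*
 * Hypothetical LLD-private per-LUN busy counter, kept in the data the
 * driver already hangs off sdev->hostdata.
 */
#include <linux/atomic.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

struct my_lun_data {
	atomic_t busy;		/* commands issued but not yet completed */
};

/* Call from .queuecommand, just before the IO is fired to the HBA. */
static void my_lld_start_io(struct scsi_cmnd *cmd)
{
	struct my_lun_data *lun = cmd->device->hostdata;

	atomic_inc(&lun->busy);
}

/* Call from the completion/ISR path. */
static void my_lld_complete_io(struct scsi_cmnd *cmd)
{
	struct my_lun_data *lun = cmd->device->hostdata;

	atomic_dec(&lun->busy);
}

/* Driver-local replacement for reading sdev->device_busy. */
static int my_lld_lun_busy(struct scsi_device *sdev)
{
	struct my_lun_data *lun = sdev->hostdata;

	return atomic_read(&lun->busy);
}

Since the driver owns the counter, it is free to relax it later (make it
per-CPU, approximate, or drop it) without touching the mid layer.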

> And if the drivers think the private counter isn't good enough, they can
> improve it in any way, such as this per-CPU approach, or even kill it.
>

I was more concerned about the functional issue discovered in the
earlier patch, and provided mine as an alternative without any side
effects or functional issues, since it does not modify any core logic.
Having said that, if your latest patch goes through and is accepted,
then I agree that my patch is not needed. If, however, some issue is
discovered in your latest patch, then I would request that my patch be
considered as an alternative, so that the device_busy counter overhead
can still be avoided.

> >
> > This patch is an attempt to address both the above issues.
> > For this patch to be effective, LLDs need to set a specific flag, use_per_cpu_device_busy, in the scsi_host_template. For other drivers (which do not set the flag), this patch is a no-op and should not affect their performance or functionality at all.
> >
> > Also, this patch does not fundamentally change any logic or functionality of the code. All it does is replace device_busy with a per-CPU counter. In the fast path, all CPUs increment/decrement their own counter. In the relatively slow path, they call the scsi_device_busy function to get the total number of IOs outstanding on a device. The only functional aspect it changes is that, for non-rotating media, the number of IOs to a device is not restricted. Controllers that can handle that can set the use_per_cpu_device_busy flag in the scsi_host_template to take advantage of this patch. Other controllers need not modify any code and will work as usual.
> > Since the patch does not modify any other functional aspects, it should not have any side effects even for drivers that do set the use_per_cpu_device_busy flag.
> > ---
> >  drivers/scsi/scsi_lib.c    | 151 ++++++++++++++++++++++++++++++++++---
> >  drivers/scsi/scsi_scan.c   |  16 ++++
> >  drivers/scsi/scsi_sysfs.c  |   9 ++-
> >  drivers/scsi/sg.c          |   2 +-
> >  include/scsi/scsi_device.h |  15 ++++
> >  include/scsi/scsi_host.h   |  16 ++++
> >  6 files changed, 197 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index 2563b061f56b..5dc392914f9e 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -52,6 +52,12 @@
> >  #define  SCSI_INLINE_SG_CNT  2
> >  #endif
> >
> > +#define MAX_PER_CPU_COUNTER_ABSOLUTE_VAL (0xFFFFFFFFFFF)
> > +#define PER_CPU_COUNTER_OK_VAL (MAX_PER_CPU_COUNTER_ABSOLUTE_VAL>>16)
> > +#define USE_DEVICE_BUSY(sdev)        (!(sdev)->host->hostt->use_per_cpu_device_busy \
> > +                             || !blk_queue_nonrot((sdev)->request_queue))
> > +
> > +
> >  static struct kmem_cache *scsi_sdb_cache;
> >  static struct kmem_cache *scsi_sense_cache;
> >  static struct kmem_cache *scsi_sense_isadma_cache;
> > @@ -65,6 +71,111 @@ scsi_select_sense_cache(bool unchecked_isa_dma)
> >       return unchecked_isa_dma ? scsi_sense_isadma_cache : scsi_sense_cache;
> >  }
> >
> > +/*
> > + *Generic helper function to decrement per cpu io counter.
> > + *@per_cpu_counter: The per cpu counter array. Current cpu counter will be
> > + * decremented
> > + */
> > +
> > +static inline void dec_per_cpu_io_counter(atomic64_t __percpu *per_cpu_counter)
> > +{
> > +     atomic64_t __percpu *io_count = get_cpu_ptr(per_cpu_counter);
> > +
> > +     if (unlikely(abs(atomic64_dec_return(io_count)) >
> > +                             MAX_PER_CPU_COUNTER_ABSOLUTE_VAL))
> > +             scsi_rebalance_per_cpu_io_counters(per_cpu_counter, io_count);
> > +     put_cpu_ptr(per_cpu_counter);
> > +}
> > +/*
> > + *Generic helper function to increment per cpu io counter.
> > + *@per_cpu_counter: The per cpu counter array. Current cpu counter will be
> > + * incremented
> > + */
> > +static inline void inc_per_cpu_io_counter(atomic64_t __percpu *per_cpu_counter)
> > +{
> > +     atomic64_t __percpu *io_count = get_cpu_ptr(per_cpu_counter);
> > +
> > +     if (unlikely(abs(atomic64_inc_return(io_count)) >
> > +                             MAX_PER_CPU_COUNTER_ABSOLUTE_VAL))
> > +             scsi_rebalance_per_cpu_io_counters(per_cpu_counter, io_count);
> > +     put_cpu_ptr(per_cpu_counter);
> > +}
> > +
> > +
> > +/**
> > + * scsi_device_busy - Return the device_busy counter
> > + * @sdev:    Pointer to scsi_device to get busy counter.
> > + **/
> > +int scsi_device_busy(struct scsi_device *sdev)
> > +{
> > +     long long total = 0;
> > +     int i;
> > +
> > +     if (USE_DEVICE_BUSY(sdev))
>
> As Ewan and Bart commented, you can't simply use the NONROT queue flag
> in the IO path, given that it may be changed elsewhere.
>

I added the NONROT check just as an afterthought. This patch is
designed for high-end controllers, and most of them have some storage
IO size limit. Also, for HDDs, sequential IO is almost always large and
hits the controller's max IO size limit, so I am not sure merging
matters much for these kinds of controllers. Databases use REDO logs
with small sequential IO, but those are targeted at SSDs, where latency
and IOPS are far more important than IO merging.
Anyway, this patch is opt-in for drivers, so any LLD that cannot take
advantage of the flag need not set it, and will work as-is.
I can provide a new version of the patch with this check removed.
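
With the NONROT check dropped, opting in would come down to a driver
setting the flag in its host template. A made-up example LLD for
illustration only; the use_per_cpu_device_busy field exists only in this
RFC patch:

/*
 * Hypothetical LLD opting in to the per-CPU device_busy accounting
 * proposed by this patch.  Everything except the standard template
 * fields is illustrative.
 */
#include <linux/module.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int my_fast_hba_queuecommand(struct Scsi_Host *shost,
				    struct scsi_cmnd *cmd);

static struct scsi_host_template my_fast_hba_template = {
	.module			= THIS_MODULE,
	.name			= "my_fast_hba",
	.queuecommand		= my_fast_hba_queuecommand,
	.this_id		= -1,
	/*
	 * Ask the mid layer to track outstanding IO with per-CPU counters
	 * instead of the shared sdev->device_busy counter.  Drivers that
	 * leave this clear keep the existing behaviour.
	 */
	.use_per_cpu_device_busy = 1,
};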

> Thanks,
> Ming
>



