Re: [PATCH V3 0/5] dm-rq: improve sequential I/O performance

On Fri, Jan 12 2018 at  2:53pm -0500,
Elliott, Robert (Persistent Memory) <elliott@xxxxxxx> wrote:

> 
> 
> > -----Original Message-----
> > From: linux-block-owner@xxxxxxxxxxxxxxx [mailto:linux-block-
> > owner@xxxxxxxxxxxxxxx] On Behalf Of Bart Van Assche
> ...
> > The intention of commit 6077c2d706097c0 was to address the last mentioned
> > case. It may be possible to move the delayed queue rerun from
> > dm_mq_queue_rq() into dm_requeue_original_request(). But I think it would be
> > wrong to rerun the queue immediately in case a SCSI target system returns
> > "BUSY".
> 
> Seeing SCSI BUSY mentioned here...
> 
> On its own, patch 6077c2d706 seems to be adding an arbitrarily selected
> magic value of 100 ms without explanation in the patch description or
> in the added code.

It was 50 ms before it was 100 ms.  No real explanation for these
values other than that they seem to make Bart's IB SRP testbed happy?
 
> But, dm_requeue_original_request() already seems to have chosen that value
> for similar purposes:
>         unsigned long delay_ms = delay_requeue ? 100 : 0;
> 
> So, making them share a #define would indicate there's a reason for that
> particular value.  Any change to the value would be picked up everywhere.
> 
> All the other callers of blk_mq_delay_run_hw_queue() use macros:
> drivers/md/dm-rq.c:             blk_mq_delay_run_hw_queue(hctx, 100/*ms*/);
> drivers/nvme/host/fc.c:         blk_mq_delay_run_hw_queue(queue->hctx, NVMEFC_QUEUE_DELAY);
> drivers/scsi/scsi_lib.c:                blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);
> drivers/scsi/scsi_lib.c:                        blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);

Sure, I'll add a #define.
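
Probably something like this in drivers/md/dm-rq.c (untested sketch; the
name DM_MQ_QUEUE_DELAY_MS is just a placeholder, not final):

/* msecs to wait before rerunning the hw queue after a requeue or BUSY */
#define DM_MQ_QUEUE_DELAY_MS	100

so dm_requeue_original_request() would use:

	unsigned long delay_ms = delay_requeue ? DM_MQ_QUEUE_DELAY_MS : 0;

and the BUSY path in dm_mq_queue_rq() would become:

	blk_mq_delay_run_hw_queue(hctx, DM_MQ_QUEUE_DELAY_MS);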

> Those both happen to be set to 3 (ms).

As for the value of 100: we're dealing with path faults, so retrying
extremely fast could be wasted effort.  But there is obviously no one
size that fits all.  As storage gets faster, 100 ms could prove to be
very bad.
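
If the delay ever does need to vary per setup, one option would be a
module parameter along these lines (purely an untested sketch, and
"queue_delay_ms" is a made-up name, not something I'm proposing now):

/* runtime-tunable variant of the requeue delay, default 100 ms */
static unsigned dm_mq_queue_delay_ms = 100;
module_param_named(queue_delay_ms, dm_mq_queue_delay_ms, uint, 0644);
MODULE_PARM_DESC(queue_delay_ms,
	"Delay in msecs before rerunning the hw queue after a requeue");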

But without tests to verify a change, there won't be one.

Thanks,
Mike


