Re: TCMU buffer sizing (was Re: [PATCH 3/4] target/user: Introduce data_bitmap, replace data_length/data_head/data_tail)

On Wed, 2016-03-02 at 17:43 -0800, Sheng Yang wrote:
> On Sat, Feb 27, 2016 at 4:16 PM, Nicholas A. Bellinger
> <nab@xxxxxxxxxxxxxxx> wrote:
> > On Fri, 2016-02-26 at 12:33 -0800, Sheng Yang wrote:
> >> On Fri, Feb 26, 2016 at 12:00 PM, Andy Grover <agrover@xxxxxxxxxx> wrote:
> >> > On 02/26/2016 11:43 AM, Sheng Yang wrote:

<SNIP>

> >> I think in that case we don't want to handle a 1MB request. We can
> >> limit the max sectors one command can handle to e.g. 128 sectors
> >> (64KB); with 128 commands in the queue, that's 8MB for the data
> >> ring, which sounds pretty reasonable.
> >>
> >> The problem is I don't know how to limit it on TCMU. I can only find
> >> a way to do it in the tcm_loop loopback device.
> >>
> >
> > So a backend driver declares its supported max_sectors per I/O using
> > dev->dev_attrib.hw_max_sectors, which is reported back to the initiator
> > via the Block Limits VPD page (0xb0) as MAXIMUM TRANSFER LENGTH.
> >
> > However, not all initiator hosts honor MAXIMUM TRANSFER LENGTH, and
> > these hosts will continue to send I/Os larger than hw_max_sectors.
> >
> > The work-around we settled on last year to address this (at fabric
> > driver level) for qla2xxx was to include a new TFO->max_data_sg_nents,
> > which declares the HW fabric limit for maximum I/O size here:
> >
> > target/qla2xxx: Honor max_data_sg_nents I/O transfer limit
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8f9b565
> >
> > When an I/O is received that is larger than the fabric HW limit, the
> > residual_count is set for the payload that could not be transferred, so
> > the host will reissue the remainder in a new I/O upon completion.
> >
> > The same logic for generating a residual_count for I/Os exceeding a
> > particular backend's hw_max_sectors could easily be added in
> > target_core_transport.c:target_check_max_data_sg_nents().
> >
> > Something akin to:

<SNIP>
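To recap the shape of that check without the snipped hunk: the below is
only a rough sketch, the helper name is made up, and the real
fabric-level version lives in target_check_max_data_sg_nents().  Field
names follow the mainline struct se_cmd / se_dev_attrib definitions.

/*
 * Sketch only: clamp a command that exceeds the backend's
 * hw_max_sectors and report the remainder as residual, so the
 * initiator re-issues the rest in a new I/O.
 */
static void target_clamp_to_hw_max_sectors(struct se_cmd *cmd,
					   struct se_device *dev)
{
	u32 max_len = dev->dev_attrib.hw_max_sectors *
		      dev->dev_attrib.block_size;

	if (cmd->data_length <= max_len)
		return;

	cmd->residual_count = cmd->data_length - max_len;
	cmd->data_length = max_len;
	cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
}

The real version also has to take care with the WRITE/overflow
direction, but the clamp itself looks like the above.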

> Thank you for the explanation!
> 
> This is definitely the way to go for the max_sectors part.
> 

So, thinking about this more, I don't think returning residuals for all
backends based on hw_max_sectors (by default) makes sense given recent
block layer changes.

That is, in v4.x kernels iblock backends can submit arbitrarily large
bios, and the block layer will handle the split for us.

Perhaps a better approach would be to let drivers like TCMU and FILEIO
strictly enforce their hw_max_sectors using residuals, while still
allowing IBLOCK to process larger I/Os.
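
As a sketch of how that split could be expressed (the flag below is
hypothetical and does not exist upstream; it just piggybacks on the
existing per-backend transport_flags):

/*
 * Hypothetical: TCMU and FILEIO would set this in their
 * target_backend_ops so that only they get the residual-based clamp,
 * while IBLOCK keeps letting the block layer split large bios.
 */
#define TRANSPORT_FLAG_STRICT_MAX_SECTORS	0x10

static bool target_backend_enforces_max_sectors(struct se_device *dev)
{
	return dev->transport->transport_flags &
	       TRANSPORT_FLAG_STRICT_MAX_SECTORS;
}

The residual clamp would then only run for backends where this returns
true.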

> However, the queue depth part still confuses me. I'm not sure if
> dev->hw_queue_depth is honored.

Once upon a time LIO enforced hw_queue_depth values internally, but in
modern code these are purely informational.



