Re: [MINI SUMMIT] SCSI core performance

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Tue, 2012-07-17 at 19:39 -0700, Nicholas A. Bellinger wrote:
> Hi KS-PCs,
> 
> I'd like to propose a SCSI performance mini-summit to see how interested
> folks are in helping address the long-term issues that SCSI core is
> currently facing wrt to multi-lun per host and heavy small block random
> I/O workloads.
> 
> I know this would probably be better suited for LSF (for the record it
> was proposed this year) but now that we've acknowledge there is a
> problem with SCSI LLDs vs. raw block drivers vs. other SCSI subsystems,
> it would be useful to get the storage folks into a single room at some
> point during KS/LPC to figure out what is actually going on with SCSI
> core.

You seem to have a short memory:  The last time it was discussed

http://marc.info/?t=134155373900003

It rapidly became apparent there isn't a problem.  Enabling high IOPS in
the SCSI stack is what I think you mean.

> As mentioned in the recent tcm_vhost thread, there are a number of cases
> where drivers/target/ code can demonstrate this limitation pretty
> vividly now.
> 
> This includes the following scenarios using raw block flash export with
> target_core_mod + target_core_iblock export and the same small block
> (4k) mixed random I/O workload with fio:
> 
> *) tcm_loop local SCSI LLD performance is an order of magnitude slower 
>    than the same local raw block flash backend.
> *) tcm_qla2xxx performs better using MSFT Server hosts than Linux v3.x
>    based hosts using 2x socket Nehalem hardware w/ PCI-e Gen2 HBAs
> *) ib_srpt performs better using MSFT Server host than RHEL 6.x .32 
>    based hosts using 2x socket Romley hardware w/ PCI-e Gen3 HCAs
> *) Raw block IBLOCK export into KVM guest v3.5-rc w/ virtio-scsi is 
>    behind in performance vs. raw local block flash.  (cmwq on the host 
>    is helping here, but still need to with MSFT SCSI mini-port)
> 
> Also, with 1M IOPs into a single VM guest now being done by other non
> Linux based hypervisors, the virtualization bit for high performance KVM
> SCSI based storage is quickly coming on..
> 
> So all of that said, I'd like to at least have a discussion with the key
> SCSI + block folks who will be present in San Diego on path forward to
> address these without having to wait until LSF-2013 + hope for a topic
> slot to materialize then.
> 
> Thank you for your consideration,

Well, your proposal is devoid of an actual proposal.

Enabling high IOPS involves reducing locking overhead and path length
through the code.  I think most of the low hanging fruit in this area is
already picked, but if you have an idea, please say.  There might be
something we can extract from the lockless queue work Jens is doing, but
we need that to materialise first.

Without a concrete thing to discuss, shooting the breeze on high IOPS in
the SCSI stack is about as useful as discussing what happened in last
night's episode of Coronation Street which, when it happens in my house,
always helps me see how incredibly urgent fixing the leaky tap I've been
putting off for months actually is.

If someone can come up with a proposal ... or even perhaps another path
trace showing where the reducible overhead and lock problems are we can
discuss it on the list and we might have a real topic by the time LSF
rolls around.

James


--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
[Index of Archives]     [SCSI Target Devel]     [Linux SCSI Target Infrastructure]     [Kernel Newbies]     [IDE]     [Security]     [Git]     [Netfilter]     [Bugtraq]     [Yosemite News]     [MIPS Linux]     [ARM Linux]     [Linux Security]     [Linux RAID]     [Linux ATA RAID]     [Linux IIO]     [Samba]     [Device Mapper]
  Powered by Linux