Re: Integration of SCST in the mainstream Linux kernel

On Thu, 2008-01-31 at 18:40 +0100, Bart Van Assche wrote:
> On Jan 31, 2008 6:14 PM, Nicholas A. Bellinger <nab@xxxxxxxxxxxxxxx> wrote:
> > Also, with STGT being a pretty new design which has not undergone a
> > lot of optimization, perhaps profiling both pieces of code against
> > similar tests would give us a better idea of where userspace
> > bottlenecks reside. Likewise, the overhead involved with traditional
> > iSCSI for bulk IO from kernel / userspace would be a key concern for
> > a much larger set of users, as iSER and SRP on IB have a pretty small
> > userbase and will probably remain small for the near future.
> 
> Two important trends in data center technology are server
> consolidation and storage consolidation. Among others, every web
> hosting company can profit from a fast storage solution. I wouldn't
> call this a small user base.
> 
> Regarding iSER and SRP on IB: InfiniBand is today the most economical
> solution for a fast storage network. I do not know which technology
> will be the most popular for storage consolidation within a few years
> -- this could be SRP, iSER, IPoIB + SDP, FCoE (Fibre Channel over
> Ethernet) or maybe yet another technology. No matter which technology
> becomes the most popular for storage applications, there will be a
> need for high-performance storage software.
> 

I meant small in reference to storage on IB fabrics, which has usually
been confined to research and national lab settings, with some other
vendors offering IB as an alternative storage fabric for those who
[w,c]ould not wait for 10 Gb/sec copper Ethernet and Direct Data
Placement to come online.  Those numbers are small compared to, say,
traditional iSCSI, which is getting used all over the place these days
in areas I won't bother listing here.

As for the future, I am obviously cheering for IP storage fabrics, in
particular 10 Gb/sec Ethernet and Direct Data Placement in concert with
iSCSI Extensions for RDMA to give the data center a high performance,
low latency transport that can do OS independent storage multiplexing
and recovery across multiple independently developed implementations.
Also, avoiding lock-in from non-interoperable storage transports
(especially on the high end), which plagued so many vendors in years
past, has become a real option in the past few years with an IETF
defined block level storage protocol.  We are actually going on four
years since RFC-3720 was ratified (April 2004).
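
For anyone on the list who has not read RFC-3720 end to end: every
iSCSI PDU starts with a fixed 48-byte Basic Header Segment, and that
fixed wire format is a big part of why independently developed
initiators and targets can interoperate at all.  A rough sketch of the
layout in C, paraphrased from section 10.2.1 of the RFC; the struct and
field names below are mine, not lifted from any particular
implementation:

#include <stdint.h>

/*
 * Sketch of the 48-byte iSCSI Basic Header Segment (BHS) that starts
 * every PDU, paraphrased from RFC-3720 section 10.2.1.  Real
 * implementations define per-opcode variants of this.
 */
struct iscsi_bhs {
	uint8_t  opcode;              /* I bit + 6-bit opcode */
	uint8_t  flags;               /* F bit + opcode-specific flags */
	uint8_t  rsvd[2];             /* opcode-specific */
	uint8_t  total_ahs_len;       /* extra header segments, 4-byte words */
	uint8_t  data_seg_len[3];     /* 24-bit big-endian data segment length */
	uint8_t  lun[8];              /* LUN or opcode-specific */
	uint32_t init_task_tag;       /* initiator task tag, big-endian */
	uint8_t  opcode_specific[28]; /* bytes 20-47 */
} __attribute__((packed));            /* must be exactly 48 bytes */

Everything past byte 19 is opcode-specific, which is how the same 48
bytes carry login, SCSI command, data-in/out and task management PDUs.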

Making the 'enterprise' Ethernet switching equipment go from millisecond
to nanosecond latency is a whole different story that goes beyond my
area of expertise.  I know there is one startup (Fulcrum Micro) that is
working on this problem and seems to be making some good progress.
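
Circling back to the profiling suggestion quoted at the top: before
anyone reaches for oprofile, a crude O_DIRECT read loop run against
both targets over the same fabric will already show where the bulk IO
ceilings sit.  A minimal sketch (the block size and total size below
are arbitrary placeholders; build with gcc -O2):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define BLOCK_SIZE  (256 * 1024)             /* 256 KB per read */
#define TOTAL_BYTES (1024LL * 1024 * 1024)   /* stop after 1 GB */

int main(int argc, char **argv)
{
	struct timeval t0, t1;
	long long done = 0;
	void *buf;
	double secs;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
		return 1;
	}
	/* O_DIRECT bypasses the page cache so we measure the fabric
	 * and the target, not the initiator's memory. */
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* O_DIRECT requires an aligned buffer */
	if (posix_memalign(&buf, 4096, BLOCK_SIZE)) {
		perror("posix_memalign");
		return 1;
	}
	gettimeofday(&t0, NULL);
	while (done < TOTAL_BYTES) {
		ssize_t n = read(fd, buf, BLOCK_SIZE);
		if (n < 0) {
			perror("read");
			return 1;
		}
		if (n == 0)   /* hit the end of the device */
			break;
		done += n;
	}
	gettimeofday(&t1, NULL);
	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%lld bytes in %.2f s = %.1f MB/s\n",
	       done, secs, done / secs / 1e6);
	free(buf);
	close(fd);
	return 0;
}

Run it once against an SCST-exported LUN and once against an
STGT-exported LUN from the same initiator; oprofile on the target side
can then explain whatever gap shows up.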

--nab

> References:
> * Michael Feldman, Battle of the Network Fabrics, HPCwire, December
> 2006, http://www.hpcwire.com/hpc/1145060.html
> * NetApp, Reducing Data Center Power Consumption Through Efficient
> Storage, February 2007,
> http://www.netapp.com/ftp/wp-reducing-datacenter-power-consumption.pdf
> 
> Bart Van Assche.
> 

