[LSF/MM TOPIC] Re: Announcement: STEC EnhanceIO SSD caching software for Linux kernel

Since Joe is putting together a testing tree to compare the three caching
things, what do you all think of having a(nother) session about SSD caching at
this year's LSFMM Summit?

[Apologies for hijacking the thread.]
[Adding lsf-pc to the cc list.]

--D

On Fri, Jan 18, 2013 at 12:36:42PM -0600, Jason Warr wrote:
> 
> On 01/18/2013 11:44 AM, Amit Kale wrote:
> >> As much as I dislike Oracle, that is one of my primary applications.  I
> >> am attempting to get one of my customers to set up an Oracle instance
> >> that is modular, in that I can move the storage around to fit a
> >> particular hardware setup and have a consistent benchmark that they use
> >> in the real world to gauge performance.  One of them is a debit card
> >> transaction clearing entity on multi-TB databases, so latency REALLY
> >> matters there.
> > I am curious as to how SSD latency matters so much to overall transaction times.
> >
> > We do a lot of performance measurements using SQL database benchmarks.  Transaction times vary a lot depending on the location of the data, the complexity of the transaction, etc.  Typically TPM (transactions per minute) is the metric of primary interest for TPC-C.
> >
> 
> It's not specifically SSD latency; it's I/O transaction latency that
> matters.  This particular application is very sensitive to that because
> it is literally someone standing at a POS terminal swiping a
> debit/credit card.  You only have a couple of seconds after the PIN is
> entered for the transaction to go through your network and application
> server, authorize against a DB, and get back to the POS.
> 
> The entire I/O stack on the DB is only a small time-slice of that round
> trip.  Your 99th percentile needs to be under 20ms on the DB storage
> side, and if your worst-case DB I/O goes beyond 300ms it is considered
> an outage, because the POS transaction fails.  (A rough sketch of that
> percentile check appears after the quoted message below.)  So it
> obviously takes a lot of planning and optimization work on the DB
> itself, and a good tablespace layout, to even get into the realm where
> latency is that predictable with multi-million dollar FC storage frames.
> 
> One of my goals is to be able to offer this level of I/O service on
> commodity hardware: simplify the scope of the hardware, reduce the
> number of points of failure, make the systems more portable, reduce or
> eliminate dependence on any specific vendor below the application, and
> save money.  Not to mention reducing the number of fingers that can
> point away from themselves, claiming the fault is someone else's
> problem to find.
> 
> A lot of the pieces are already out there.  A good block caching target
> is one of the missing pieces needed to fill the ever-growing canyon
> between non-block-device system performance and storage.  What they have
> done with L2ARC and SLOG in ZFS/Solaris is good, but it has some serious
> shortcomings in areas that DM/MD/LVM handle extremely well.
> 
> I appreciate all of the brilliant work you guys do, and hopefully
> I can contribute a little bit of usefulness to this effort.
> 
> Thank you,
> 
> Jason
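As a rough illustration of the percentile check Jason describes above (not
anything from the caching code under discussion): given one completion
latency in milliseconds per line on stdin, e.g. extracted from fio latency
logs or a blktrace parse, a short script can flag the SLA violations.  The
20ms p99 and 300ms worst-case thresholds are the numbers quoted in his
mail; the script itself is a hypothetical sketch.

#!/usr/bin/env python
# Sketch: check I/O latency samples against the SLA quoted above.
# Reads one latency-in-milliseconds value per line from stdin; the
# thresholds mirror Jason's numbers (p99 < 20ms, any completion
# over 300ms counts as an outage).  The input source is up to you.
import sys

def percentile(sorted_vals, pct):
    # Nearest-rank percentile of an already-sorted list.
    idx = max(0, int(len(sorted_vals) * pct / 100.0 + 0.5) - 1)
    return sorted_vals[idx]

lat_ms = sorted(float(line) for line in sys.stdin if line.strip())
if not lat_ms:
    sys.exit("no latency samples on stdin")

p99 = percentile(lat_ms, 99)
worst = lat_ms[-1]

print("samples: %d  p99: %.2f ms  worst: %.2f ms"
      % (len(lat_ms), p99, worst))
if p99 > 20.0:
    print("FAIL: p99 exceeds the 20 ms target")
if worst > 300.0:
    print("FAIL: worst case exceeds 300 ms -- would count as an outage")

Feed it a file of per-I/O latencies already converted to milliseconds
(column and unit conversion depend on the tracing tool you use).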

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel

