SSD journal deployment experiences

On Tue, 9 Sep 2014 10:57:26 -0700 Craig Lewis wrote:

> On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer <chibi at gol.com> wrote:
> 
> > On Sat, 06 Sep 2014 16:06:56 +0000 Scott Laird wrote:
> >
> > > Backing up slightly, have you considered RAID 5 over your SSDs?
> > >  Practically speaking, there's no performance downside to RAID 5 when
> > > your devices aren't IOPS-bound.
> > >
> >
> > Well...
> > For starters, with RAID5 you would lose 25% of the throughput in both Dan's
> > and my case (4 SSDs) compared to JBOD SSD journals.
> > In Dan's case that might not matter due to other bottlenecks, in my
> > case it certainly would.
> >
> 
> It's a trade-off between lower performance all the time and much lower
> performance while you're backfilling those OSDs.  To me, this seems like
> a somewhat reasonable idea for a small cluster, where losing one SSD
> could lose >5% of the OSDs.  It doesn't seem worth the effort for a large
> cluster, where losing one SSD would lose <1% of the OSDs.
> 
A good point, but in my case for example (4x Intel DC S3700 100GB, each with 2
journals, in front of 8 HDDs) the SSDs are already the limiting factor (and one
I willingly accept).
Lowering that by another 25% just doesn't feel worth it, given the
reliability/durability of the Intel SSDs.
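
To put numbers on that 25%, here is a quick back-of-the-envelope sketch in
Python; the per-device throughput figures below are assumptions for
illustration, not measurements, so substitute your own benchmark results:

# Journal bandwidth, JBOD vs RAID5, for the setup above.
# All MB/s figures are assumed, not measured.
SSD_WRITE_MBPS = 200.0   # assumed sequential write of one 100GB DC S3700
HDD_WRITE_MBPS = 120.0   # assumed sustained write of one backing HDD
NUM_SSDS = 4
NUM_HDDS = 8

jbod_bw  = NUM_SSDS * SSD_WRITE_MBPS          # 4 independent journal devices
raid5_bw = (NUM_SSDS - 1) * SSD_WRITE_MBPS    # one device's worth goes to parity
hdd_bw   = NUM_HDDS * HDD_WRITE_MBPS          # what the 8 OSD disks could absorb

print("JBOD journals : %4.0f MB/s" % jbod_bw)                    # 800 MB/s
print("RAID5 journals: %4.0f MB/s (%.0f%% less)"
      % (raid5_bw, 100 * (1 - raid5_bw / jbod_bw)))              # 600 MB/s, 25% less
print("HDDs behind   : %4.0f MB/s" % hdd_bw)                     # 960 MB/s

With those (assumed) numbers the journal SSDs are already the ceiling, and
RAID5 only lowers it further.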

> 
> >
> > And while you're quite correct when it comes to IOPS, doing RAID5 will
> > either consume significant CPU resources in the software RAID case or
> > require a decent HW RAID controller.
> >
> > Christian
> 
> 
>  I haven't worried about CPU with software RAID5 in a very long time...
> maybe Pentium 4 days?  It's so rare to actually have 0% idle CPU, even
> under high loads.
>
True in most cases indeed...
 
> Most of my RAID5 is ZFS, but the CPU hasn't been the limiting factor on
> my database or NFS servers.  I'm even doing software crypto, without CPU
> support, with only a 10% performance penalty.  If the CPU has AES
> support, crypto is free.  Obviously, RAID0 (or fully parallel JBOD) will
> be faster than RAID5, but RAID5 is faster than RAID10 for all but the
> most heavily read-biased workloads.  Surprised the hell out of me.  I'll
> be converting all of my database servers from RAID10 to RAIDZ.  Of
> course, benchmarks that match your workload trump some random yahoo on
> the internet.  :-)
> 
RAID5 (FWIW, I won't deploy anything less than RAID6 with more than 4 drives
anymore) will indeed be faster than an equally sized RAID10, given that there
are more data disks to play with. However, that speed (bandwidth) does not
necessarily translate into IOPS, especially with software RAID as opposed
to a HW RAID controller with a large HW cache.
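
As a rough illustration of bandwidth versus IOPS, here is a sketch using the
textbook write-penalty factors (2 for RAID10, 4 for the RAID5 read-modify-write
path); the per-drive figures are assumptions, not measurements:

# Streaming bandwidth vs small random-write cost for N drives.
# Write penalties: RAID10 = 2 (data + mirror),
#                  RAID5  = 4 (read data, read parity, write data, write parity).
# Per-drive numbers are assumptions, not measurements.
def compare(n_drives, drive_mbps=150.0, drive_iops=120.0):
    raid10_stream = (n_drives / 2.0) * drive_mbps   # half the spindles hold data
    raid10_wiops  = n_drives * drive_iops / 2.0
    raid5_stream  = (n_drives - 1) * drive_mbps     # one spindle's worth of parity
    raid5_wiops   = n_drives * drive_iops / 4.0     # read-modify-write penalty
    return (raid10_stream, raid10_wiops), (raid5_stream, raid5_wiops)

r10, r5 = compare(8)
print("RAID10: %.0f MB/s streaming, %.0f small-write IOPS" % r10)   # 600 MB/s, 480 IOPS
print("RAID5 : %.0f MB/s streaming, %.0f small-write IOPS" % r5)    # 1050 MB/s, 240 IOPS

A HW controller with a large write-back cache can coalesce much of that
read-modify-write; plain software RAID cannot, which is where the IOPS gap
comes from.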

> 
> Ceph OSD nodes are a bit different though.  They're one of the few beasts
> I've dealt with that are CPU, Disk, and network bound all at the same
> time. If you have some idle CPU during a big backfill, then I'd consider
> Software RAID5 a possibility.  If you ever sustain 0% idle, then I
> wouldn't try it.

Precisely, and that is the reason I mentioned it in this context.
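
For anyone who wants to check that, a minimal sketch (Linux only, the 10 second
window is an arbitrary choice) that samples overall idle CPU from /proc/stat
while a big backfill is running:

import time

def cpu_idle_and_total():
    # First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
    with open("/proc/stat") as f:
        fields = [float(x) for x in f.readline().split()[1:]]
    return fields[3] + fields[4], sum(fields)   # idle + iowait, total jiffies

idle1, total1 = cpu_idle_and_total()
time.sleep(10)
idle2, total2 = cpu_idle_and_total()

print("idle over the last 10s: %.1f%%"
      % (100.0 * (idle2 - idle1) / (total2 - total1)))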

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/

