Re: best base / worst case RAID 5,6 write speeds

Dallas,

I suspect you've hit a known problem area with Linux disk I/O, which
is that big queue depths aren't optimal.  They're better for systems
where you're talking to a big backend disk array with lots of
battery-backed cache/memory, which can acknowledge those writes
immediately but then retire them to disk in a more optimal order.

So using a queue depth of 4, which is per-device, means that across
your 12 devices you can have up to 48 writes outstanding at a time.
Just doubling that to 8 means you can have 96 writes outstanding,
which ties up memory buffers on the system, etc.
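
To make that concrete, here's a rough sketch of a fio run that keeps
4 writes in flight on a single member disk (the device path and block
size are placeholders, not a recommendation); repeat it per disk, or
use a fio job file with one section per device:

    # sketch: direct sequential writes, 4 I/Os in flight on one disk
    fio --name=qd4 --filename=/dev/sdb --direct=1 --ioengine=libaio \
        --rw=write --bs=64k --iodepth=4 --runtime=60 --time_based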

As you can see from your numbers, throughput peaks at a queue depth
of 4, then trends downward before falling off a cliff.  So now what
I'd do is keep the queue depth at 4, but vary the block size and
other parameters and see how things change there.
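
Something along these lines would do it (again just a sketch; adjust
the device, sizes, and runtime to taste):

    # hold iodepth at 4 and sweep the block size
    for bs in 4k 16k 64k 256k 1M; do
        fio --name=bs-$bs --filename=/dev/sdb --direct=1 \
            --ioengine=libaio --rw=write --bs=$bs --iodepth=4 \
            --runtime=30 --time_based --minimal > result-bs-$bs.txt
    done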

Now this is all fun, but I also think you need to back up and
re-think the big picture.  What workloads are you looking to optimize
for?  Lots of small file writes?  Lots of big file writes?  Random
reads of big/small files?

Are you looking for backing stores for VMs?

Have you looked into battery-backed RAID cards?  They used to be a
lot more common, but these days CPUs are more than fast enough, and
JBOD works really well, with more flexibility and less chance of your
data getting lost due to vendor lock-in.

Another option, if you're looking for performance, might be using
lvmcache with a pair of mirrored SSDs.  And if you KNOW you have UPS
support on the system, you could change the cache policy from
writethrough (both the SSD and backing store writes need to complete)
to writeback (SSD writes done, backing store updated later...) so
that you get the most speed.
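
As a rough sketch (assuming a volume group named "vg" with a cache
pool already built on the SSD mirror; your names will differ):

    # attach the SSD cache pool to the data LV
    lvconvert --type cache --cachepool vg/cachepool vg/data
    # once you trust the UPS, switch the cache mode
    lvchange --cachemode writeback vg/data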

I've just recently done this setup on my home machine (not nearly as
beefy as yours), and my off-the-cuff feeling is that it's a nice
speedup.

But back to the task at hand: what is the goal here?  To find the
sweet spot for your hardware?  For fun?  I'm all for fun; this is a
great discussion.

It's too bad there's no auto-tuning script that benchmarks a setup by
running fio, records the results, then tweaks the next knob and
re-tests, all in an automated way.
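
The core loop is simple enough; here's a bare-bones sketch
(hypothetical device and knob values, and you'd still have to parse
the terse output for bandwidth afterwards):

    # sweep queue depth and block size, one fio run per combination
    for qd in 1 2 4 8 16; do
        for bs in 4k 64k 1M; do
            fio --name=tune --filename=/dev/sdb --direct=1 \
                --ioengine=libaio --rw=write --bs=$bs --iodepth=$qd \
                --runtime=30 --time_based --minimal \
                > result-qd$qd-bs$bs.txt
        done
    done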

