Re: best case / worst case RAID 5,6 write speeds

Thanks guys for all the ideas and help.

Phil,

> Very interesting indeed. I wonder if the extra I/O in flight at high
> depths is consuming all available stripe cache space, possibly not
> consistently. I'd raise and lower that in various combinations with
> various iodepth settings.  Running out of stripe cache will cause
> premature RMWs.

Okay, I'll play with that today.  I have to confess I'm not sure that
I completely understand how the stripe cache works.  I think the idea
is to batch I/Os into a complete stripe when possible and write them
out to the disks in one go, avoiding RMWs.  Other than alignment
issues, I'm unclear on what triggers RMWs.  It seems, as Robert
mentioned, that if the I/O block size is stripe-aligned, there should
never be RMWs.
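
For what it's worth, here is how I've been thinking about the
alignment side of it (just a back-of-the-envelope sketch in Python;
the chunk size and disk count below are made-up numbers, not my
actual geometry):

# Rough sketch of the full-stripe write size, i.e. the smallest write
# that raid5/6 can complete without a read-modify-write.  The chunk
# size and disk count are assumptions for illustration only.
chunk_kib   = 512                  # mdadm chunk size in KiB (assumed)
total_disks = 12                   # disks in the array (assumed)
parity      = 2                    # 2 for RAID 6, 1 for RAID 5
data_disks  = total_disks - parity

full_stripe_kib = chunk_kib * data_disks
print("full-stripe write: %d KiB" % full_stripe_kib)
# Writes that are a multiple of this size and aligned to a stripe
# boundary should not need RMWs; smaller or misaligned writes can.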

My stripe cache is 8192 btw.
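
A quick sanity check on what that setting costs in memory (again a
rough sketch; the page size and disk count are assumptions, and md0
is a placeholder for the real device):

# md allocates one page per device per stripe cache entry, so the
# memory used is roughly stripe_cache_size * page_size * nr_disks.
stripe_cache_size = 8192           # my current setting
page_size_kib     = 4              # assumed 4 KiB pages
nr_disks          = 12             # assumed disk count
mib = stripe_cache_size * page_size_kib * nr_disks // 1024
print("stripe cache RAM: ~%d MiB" % mib)

# The knob itself lives in sysfs (md0 is a placeholder):
#   cat  /sys/block/md0/md/stripe_cache_size
#   echo 16384 > /sys/block/md0/md/stripe_cache_size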

John,

> I suspect you've hit a known problem-ish area with Linux disk io, which is that big queue depths aren't optimal.

Yes, it certainly looks that way.  But, as Phil indicated, I might be
exceeding my stripe cache.  I am still surprised that there are so
many RMWs even if the stripe cache has been exhausted.
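
One way I can test that theory is to watch stripe_cache_active while
a run is in flight, e.g. with a throwaway script along these lines
(md0 and the poll interval are placeholders):

# Sample stripe_cache_active once a second during a benchmark run;
# if it sits near stripe_cache_size, the cache is likely exhausted.
import time

MD = "/sys/block/md0/md"           # placeholder md device path

with open(MD + "/stripe_cache_size") as f:
    limit = int(f.read())

for _ in range(30):                # roughly 30 seconds of samples
    with open(MD + "/stripe_cache_active") as f:
        active = int(f.read())
    print("stripe_cache_active: %6d / %d" % (active, limit))
    time.sleep(1)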

> As you can see, it peaks at a queue depth of 4, and then tends
> downward before falling off a cliff.  So now what I'd do is keep the
> queue depth at 4, but vary the block size and other parameters and see
> how things change there.

Why do you think there is a gradual drop-off after a queue depth of
4, before it falls off the cliff?

> Now this is all fun, but I also think you need to back up and
> re-think the big picture.  What workloads are you looking to optimize
> for?  Lots of small file writes?  Lots of big file writes?  Random
> reads of big/small files?

> Are you looking for backing stores for VMs?

I wish this were for fun! ;)  Although this has been a fun discussion
and I've learned a ton.  This effort is for work, though; otherwise
I'd be all over the SSDs and caching.  I'm trying to characterize and
then squeeze all of the performance I can out of a legacy NAS
product.  I am constrained by the existing hardware.  Unfortunately I
do not have the option of using SSDs or hardware RAID controllers, so
I have to rely entirely on Linux RAID.

I also need to optimize for large sequential writes (streaming video,
audio, large file transfers), iSCSI (mostly used for hosting VMs), and
random I/O (small and big files), as you would expect with a NAS.
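
As a concrete starting point for the sequential-write case, I was
planning on something along these lines, keeping the queue depth at 4
as John suggested (the target file, size, and runtime are just
placeholders):

# Rough sequential-write test at the queue depth John suggested.
# The filename, size, and runtime are placeholders, not my config.
import subprocess

cmd = [
    "fio",
    "--name=seq-write",
    "--filename=/mnt/test/fio.dat",   # placeholder target
    "--size=16G",                     # placeholder working-set size
    "--rw=write",                     # large sequential writes
    "--bs=1M",                        # ideally a full-stripe multiple
    "--ioengine=libaio",
    "--iodepth=4",                    # the depth that peaked earlier
    "--direct=1",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)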