On 13/01/18 03:35, Wol's lists wrote:
> I'll get round to writing all this up soon, so the wiki will try and
> persuade people that resizing arrays is not actually the brightest of
> ideas.
Now hang on. Don't go tarring every use case with the same brush.
There are many use cases for a bucket of disks and high performance is
but one of them.
Leaving aside XFS, let's look at ext3/4, as they seem to be the most
common filesystems in use for your average "install it and run it"
user (i.e. *ME*).
If you read the mke2fs man page and check out stride and stripe_width
(which you *used* to have to specify manually), both are clearly there
to let the filesystem know the construction of your RAID for
*performance* reasons.
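For anyone following along, here's roughly what that creation-time
tuning looks like; the chunk size, disk count and device name are made
up for illustration:

  # Hypothetical 6-disk RAID6 (4 data disks), 512K chunk, 4K fs blocks:
  #   stride       = chunk size / block size       = 512K / 4K = 128
  #   stripe_width = stride * number of data disks = 128 * 4   = 512
  mkfs.ext4 -E stride=128,stripe_width=512 /dev/md0

(A recent mke2fs will usually pull those values out of the md device
for you, which is why you no longer have to specify them by hand; the
point is that they get baked in at creation time.)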
Nowhere does *anything* mention changing geometry, and if you gave
those parameters and their explanations ten seconds' thought you'd
have to conclude: "This filesystem was optimised for the RAID geometry
it was built with. If I change that, I won't have the same performance
I had at creation time." Or maybe that was only obvious to me.
Anyway, I happily grew several large arrays over the years *knowing*
that there would be a performance impact, because for my use case I
didn't actually care.
"Enterprise" don't grow arrays. They build a storage solution that is
often extremely finely tuned for exactly their workload and they use it.
If they need more storage they either replicate or build another (with
the consequential months of tests/tuning) storage configuration. I see
Stan Hoeppner replied. If you want a good read, get him going on
workload specific XFS tuning.
It's only hacks like me that tack disks onto built arrays, but I did it
*knowing* it wasn't going to affect my workload as all I wanted was a
huge bucket of storage with quick reads. Writes don't happen often
enough to matter.
Exposing the geometry to the filesystem gives it a chance to lay out
operations in the manner least likely to create a performance hotspot
(as Dave Chinner pointed out). They are hints. Change the geometry
after the fact and all bets are off.
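(For what it's worth, ext4 at least lets you refresh those hints after
a grow; a sketch, using the same made-up geometry as above with one
extra data disk. Only data written afterwards gets allocated with the
new hints, of course.)

  # Array grown from 4 to 5 data disks, same 512K chunk:
  #   stride stays 128, new stripe_width = 128 * 5 = 640
  tune2fs -E stride=128,stripe_width=640 /dev/md0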
On another note, personally I've used XFS in a couple of
performance-sensitive roles over the years (when it *really*
mattered), but as I don't often wade into that end of the pool I tend
to stick with the ext series. e2fsck has gotten me out of some really
tight spots and I can rely on it to make the best of a really bad
mess. With XFS I've never had the pleasure of running it on anything
other than top-of-the-line hardware, so it never had to clean up after
me. It does go like a stung cat when it's tuned up, though.
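(For completeness, the XFS side of that creation-time tuning looks
roughly like this; the numbers are again purely illustrative:)

  # Same hypothetical 6-disk RAID6 (4 data disks), 512K chunk:
  #   su = stripe unit (the chunk size), sw = number of data disks
  mkfs.xfs -d su=512k,sw=4 /dev/md0

(Like mke2fs, a current mkfs.xfs will normally work this out from the
md device on its own.)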
If I were to suggest an addition to the RAID wiki, it'd be to
elaborate on the *creation*-time tuning a filesystem creation tool
does with the RAID geometry, and to point out that once you grow the
RAID, all performance bets are off. I've never seen a filesystem
actually break because of it, though.
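(The sort of grow I mean, with made-up device names:)

  # Add a disk and reshape a hypothetical RAID6 onto it:
  mdadm /dev/md0 --add /dev/sdg
  mdadm --grow /dev/md0 --raid-devices=7
  # The reshape runs in the background (watch /proc/mdstat); grow the
  # filesystem on top afterwards with resize2fs or xfs_growfs.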
I've grown RAID 1, 5 & 6. Growing a RAID10 in anything other than a
near layout by adding another set of disks just feels like a disaster
waiting to happen. Even I'm not that game.
I do have a staging machine now with a few spare disks, so I might
have a crack at it, but I won't be using a kernel and userspace as old
as the thread initiator's.
Regards,
Brad