I may add that I have expanded Linux filesystems (XFS and ext4) both via LVM and by adding disks to a hardware RAID. From the OS point of view it makes no difference: once the block device on which the filesystem resides has been expanded, the procedure is pretty much the same, and so far it has always worked like a charm.
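For reference, once the underlying device has grown, the LVM part usually looks something like this (device, VG and LV names here are made up, adjust them to your layout):

    pvresize /dev/sdb                    # let LVM see the grown device
    lvextend -l +100%FREE /dev/vg0/data  # give the LV all the new space
    xfs_growfs /srv/data                 # XFS grows online via the mount point
    # or, on ext4:
    resize2fs /dev/vg0/data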
One word of caution, though: I recently had a case with a RAID 6 across 12 disks (1 TB each, a five-year-old array) where a disk failed during a planned power outage; when the storage was turned back on, a second disk failed right after that, and a third failed during the rebuild. Luckily this was a retired server used only for backups, so no harm done, but it shows that under the "right" circumstances multi-disk failures are possible. The more disks you have in your RAID set, the higher the chance of a disk failure: by doubling the number of disks in your RAID set you double the chance of a disk failure, and therefore of a double or triple disk failure as well.
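As a rough back-of-envelope check (the 3% annual failure rate per disk is just an assumed figure, and it treats failures as independent, which a shared power event clearly violates): with 6 disks the chance of at least one failure in a year is 1 - 0.97^6, about 17%; with 12 disks it is 1 - 0.97^12, about 31%, so roughly double.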
Long story short: I'd consider creating a second RAID 6 across your 12 new disks and adding it as a second brick to your Gluster volume; that's what Gluster is for, after all: scaling your storage :) With RAID 6 you lose the capacity of two disks, but you gain a lot in terms of redundancy and data protection. You also avoid the performance impact of the RAID expansion, which is usually a rather long process that eats a lot of your performance while it's ongoing.
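For what it's worth, on a plain distribute volume the add-brick step would look roughly like this (volume name, hostname and brick path are made up; on a replicated volume you'd have to add bricks in multiples of the replica count):

    mkfs.xfs /dev/sdX                           # the new RAID 6 array
    mkdir -p /bricks/brick2
    mount /dev/sdX /bricks/brick2
    gluster volume add-brick myvol server1:/bricks/brick2/data
    gluster volume rebalance myvol start        # spread existing data across both bricks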
Of course, if you have mirrored bricks, that's a different story, but I assume you don't.

Cheers
Pascal
On 26.04.19 05:35, Jim Kinney wrote:
I've expanded bricks using lvm and there were no problems at all with gluster seeing the change. The expansion was performed basically simultaneously on both existing bricks of a replica. I would expect the raid expansion to behave similarly.