On Fri, 23 Nov 2012 14:06:09 +0000 "Tomczak, Marcin"
<marcin.tomczak@xxxxxxxxx> wrote:

> Hi. There is a problem when I try to resize raid0 by adding a disk to the
> raid volume. I have checked the problem on mdadm v3.2.3 and on code from
> the neil_master branch.
>
> OS: Red Hat 6.3 GA
> reproduction: always
> system disk: sda
> other disks: sdb, sdc, sdd
>
>
> I. mdadm version: v3.2.3
>
> # mdadm -CR /dev/md/vol --metadata=1.2 -l 0 --size=4G -n 2 /dev/sdb /dev/sdc
> # mdadm -G /dev/md/vol -n 3 --add /dev/sdd
> mdadm: /dev/md/vol: could not set level to raid4
>
>
> II. mdadm version: branch neil_master
>
> SHA1 ID: 30d48159710996be7770bfbfbddc826317b561aa
> Monitor: don't complain about non-monitorable arrays in mdadm.conf
> Author: NeilBrown <neilb@xxxxxxx> 2012-10-24 04:09:09
> Committer: NeilBrown <neilb@xxxxxxx> 2012-10-24 04:09:09
>
> # mdadm -CR /dev/md127 --metadata=1.2 -l 0 --size=15G -n 2 /dev/sdb /dev/sdc

--size isn't honoured for RAID0 (or Linear). I should probably do
something about that - generate a warning at least.

> # mdadm -G /dev/md127 -n 3 --add /dev/sdd
> mdadm: level of /dev/md127 changed to raid4
> mdadm: added /dev/sdd
> mdadm: this change will reduce the size of the array.
>        use --grow --array-size first to truncate array.
>        e.g. mdadm --grow /dev/md127 --array-size 47185920

This happens because the metadata has a size recorded in it, but it is
being ignored, which results in confusion.

> # cat /proc/mdstat
> Personalities : [raid0] [raid6] [raid5] [raid4]
> md127 : active raid4 sdd[3] sdc[1] sdb[0]
>       78107648 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/3] [UUU]
>
>
> What do you think about this mdadm behaviour?

Not ideal, but not a big problem if you avoid trying to set --size for
RAID0.

NeilBrown
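
For reference, a rough sketch of the two workarounds implied above (not
verified against this reproduction; the device names and the 47185920
figure are taken directly from the commands and mdadm output quoted
above): either create the RAID0 array without --size, or truncate the
array size with --grow --array-size before adding the disk, as mdadm's
own message suggests.

Option 1 - create the RAID0 array without --size, then grow:
# mdadm -CR /dev/md127 --metadata=1.2 -l 0 -n 2 /dev/sdb /dev/sdc
# mdadm -G /dev/md127 -n 3 --add /dev/sdd

Option 2 - truncate the array size first, then grow:
# mdadm --grow /dev/md127 --array-size 47185920
# mdadm -G /dev/md127 -n 3 --add /dev/sdd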