Hello,

# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri May 27 09:50:54 2011
     Raid Level : raid1
     Array Size : 488372863 (465.75 GiB 500.09 GB)
  Used Dev Size : 488372863 (465.75 GiB 500.09 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Sep 12 10:54:33 2012
          State : active
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

           Name : avdeb:1  (local to host avdeb)
           UUID : e29a222d:e6245302:5ff3f834:ad471a01
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       82        0      active sync   /dev/sdf2
       1       8       34        1      active sync   /dev/sdc2

       2       8       50        -      spare   /dev/sdd2

# mdadm --grow /dev/md1 --chunk=64K --level=5 --raid-devices=3
mdadm: New chunk size does not divide component size

-----

Shouldn't mdadm be able to figure out a way to proceed in this case? :)
So what if the new chunk size does not divide the component size? I am
increasing the array size by 33%, so it has plenty of new space; why not
leave a little of it unused at the end of each device so that the chunk
size does divide? (See the P.S. below for the kind of thing I mean.)

Also, I have heard of cases (on #linux-raid IRC, I think) where people
performed this kind of reshape without specifying the chunk size
explicitly and ended up with something like a 4K chunk, which is
certainly less than optimal.

-- 
With respect,
Roman

~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Stallman had a printer,
with code he could not see.
So he began to tinker,
and set the software free."
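
P.S. Here is a rough, untested sketch of the workaround I have in mind
(the arithmetic is mine, not mdadm's). The component size is 488372863
KiB, and 488372800 = 64 * 7630825 is the largest multiple of 64 KiB
below it, so shrinking each component by 63 KiB first should satisfy
the divisibility check. This assumes --grow --size takes KiB here and
that nothing (e.g. the filesystem) actually uses the last 63 KiB, so
shrink the filesystem first if needed, and have backups:

# mdadm --grow /dev/md1 --size=488372800
# mdadm --grow /dev/md1 --chunk=64K --level=5 --raid-devices=3

Something along these lines is what I would have expected mdadm to be
able to do, or at least suggest, on its own.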