Hi,

I have a raid1 composed of 2 disks, 4T each. At first I was only using 1T of each disk, then I recently grew the array with the command:

    mdadm --grow /dev/md1 --size=1951944704K

After the grow finished, I found that Avail Dev Size had become 3903889408, no longer reflecting the full 4T as before:

    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
               Name : 1
      Creation Time : Sun Apr 8 09:54:47 2018
         Raid Level : raid1
       Raid Devices : 2
     Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
         Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
    ........

    /dev/sda3:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
               Name : 1
      Creation Time : Sun Apr 8 09:54:47 2018
         Raid Level : raid1
       Raid Devices : 2
     Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
         Array Size : 1951944704 (1861.52 GiB 1998.79 GB)

After tracing the code, I found that this value is modified in mdadm's Grow.c, in the Grow_reshape() function:

    /* Update the size of each member device in case
     * they have been resized. This will never reduce
     * below the current used-size. The "size" attribute
     * understands '0' to mean 'max'.
     */
    min_csize = 0;
    rv = 0;
    for (mdi = sra->devs; mdi; mdi = mdi->next) {
        if (sysfs_set_num(sra, mdi, "size",
                          s->size == MAX_SIZE ? 0 : s->size) < 0) {
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            /* Probably kernel refusing to let us
             * reduce the size - not an error.
             */
            break;
        }

I thought the "size" attribute here means the device size in the kernel, but in this situation my component device size became smaller than its actual size.

I am curious: why don't we always set the size to MAX_SIZE?

Thanks,
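
P.S. For anyone double-checking the numbers above: as far as I can tell, mdadm's --size argument and the kernel's per-device sysfs "size" attribute are both in KiB, while Avail Dev Size in the 1.x superblock is reported in 512-byte sectors, which is why 1951944704K shows up as 3903889408. A minimal sketch of that arithmetic (the constant is just the value I passed to --grow; nothing here is mdadm code):

    #include <stdio.h>

    int main(void)
    {
        /* Value passed to mdadm --grow --size=, in KiB. */
        unsigned long long size_kib = 1951944704ULL;

        /* 1 KiB = two 512-byte sectors, the unit --examine uses
         * for Avail Dev Size. */
        unsigned long long sectors = size_kib * 2;

        double gib = sectors / 2.0 / 1024 / 1024; /* sectors -> GiB */
        double gb  = sectors * 512.0 / 1e9;       /* sectors -> GB  */

        /* Prints: 3903889408 (1861.52 GiB 1998.79 GB),
         * matching the --examine output above. */
        printf("%llu (%.2f GiB %.2f GB)\n", sectors, gib, gb);
        return 0;
    }

This also lines up with the comment in the Grow.c excerpt: if I read it right, MAX_SIZE is what s->size holds when --size=max is given, in which case mdadm writes 0 to the sysfs attribute and the kernel expands each member to its maximum; passing an explicit KiB count instead pins the per-device size to exactly that value.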