On Mon, Dec 04 2017, Duane wrote:

> On 2017-12-03 05:51 PM, NeilBrown wrote:
>> On Thu, Nov 30 2017, Duane wrote:
>>
>>> Why is the data offset set so big? I created a 3x4TB RAID5 array and
>>> the data offset was 128MB. Chunk size was the default 512kB.
>>
>> It is less than 0.1% of the device...
>>
>>> I cannot see why such a large offset is used. I would think the data
>>> offset need only be at most the chunk size plus the space (1 sector)
>>> for the superblock and bitmap.
>>
>> It is insurance. If you want to change the chunksize later, having a
>> lot of head-room will allow the reshape to go much faster.
>>
>>> When reshaping the array, I am prompted to use an external file, so I
>>> don't see that mdadm ever uses the space.
>>
>> Citation needed.... what version of mdadm, what kernel? What reshape
>> command?
>
> kernel: 9.64-1-lts

I don't know what that means. Maybe 4.9.64-1-lts? That's nice and
recent.

> mdadm: mdadm - v4.0 - 2017-01-09
> action: reduce the number of raid devices

Ahh. Reducing the number of devices doesn't use the head-space, it
uses the end-space. As you reduce the size of the array when doing
this, there is always lots of end-space. So I'm surprised that it
would want a backup file. However, without specifics (mdadm -E of the
devices before the reshape, and the exact command given) I won't be
looking into why it might.

Thanks,
NeilBrown

>>> I tried making some test arrays and got much smaller sizes. A 3x1GB
>>> RAID5 array with 64k chunks had a 1MB data offset.
>>>
>>> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem
>>> with setting the data offset to around 2MB?
>>
>> Only that it might reduce your options in the future, though probably
>> not by much.
>>
>> NeilBrown
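[Editor's note: a quick sanity check of the "less than 0.1%" figure above. This assumes the offset is 128 MiB and the drive is 4 TB in the decimal sense drives are sold in; neither unit is spelled out in the thread.]

```shell
# Fraction of a 4 TB drive consumed by a 128 MiB data offset.
awk 'BEGIN { printf "offset is %.4f%% of the device\n", 128*2^20 / 4e12 * 100 }'
```

For setting the offset explicitly, as asked at the end of the thread, `mdadm --create` accepts a `--data-offset=` option (e.g. `--data-offset=2M`), at the cost of the reshape head-room discussed above.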