Re: very large data-offset?

On 2017-12-03 05:51 PM, NeilBrown wrote:
> On Thu, Nov 30 2017, Duane wrote:
>> Why is the data offset set so big? I created a 3x4TB RAID5 array and the
>> data offset was 128MB. Chunk size was the default 512kB.
>> It is less than 0.1% of the device...
>>
>> I cannot see why such a large offset is used. I would think the data
>> offset need only be at most the chunk size plus the space (1 sector) for
>> the superblock and bitmap.
> It is insurance.  If you want to change the chunksize later, having a
> lot of head-room will allow the reshape to go much faster.
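[As a hedged illustration of the head-room point: a chunk-size reshape that cannot move the data offset has to stage each stripe through a --backup-file, while spare space before the data lets the kernel relocate stripes in place. Device names below are placeholders, and the commands assume mdadm v3.3+ with 1.2 metadata.]

```shell
# Create a 3-disk RAID5; mdadm reserves a generous data offset by
# default (visible as "Data Offset" in --examine output).
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1

# Later, change the chunk size. With enough head-room before the data,
# the reshape can shift the data offset and proceed without an external
# backup; without it, mdadm prompts for a --backup-file.
mdadm --grow /dev/md0 --chunk=64
```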

>> When reshaping the array, I am prompted to use an external file, so I
>> don't see that mdadm ever uses the space.
> Citation needed.... what version of mdadm, what kernel?  What reshape
> command?

kernel:  9.64-1-lts
mdadm:   mdadm - v4.0 - 2017-01-09
action:  reduce the number of raid devices

>> I tried making some test arrays and got much smaller sizes. A 3x1GB
>> RAID5 array with 64k chunks had a 1MB data offset.
>>
>> If I make a 7x4TB RAID5 array with 64kB chunks, is there a problem with
>> setting the data offset to around 2MB?
> Only that it might reduce your options in the future, though probably
> not by much.
>
> NeilBrown
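[For reference, the offset in question can be inspected per member device, and since mdadm v3.3 it can be overridden at create time. A hedged sketch with placeholder device names; --data-offset takes kibibytes unless a K/M/G suffix is given.]

```shell
# Show the data offset recorded in a member's v1.2 superblock.
mdadm --examine /dev/sdb1 | grep -i 'offset'

# Create the 7-disk array with an explicit, small data offset (2 MiB),
# trading future reshape head-room for usable capacity.
mdadm --create /dev/md0 --level=5 --raid-devices=7 --chunk=64 \
      --data-offset=2M /dev/sd[b-h]1
```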

