Re: mdadm: Patch to restrict --size when shrinking unless forced

On Mon, Oct 09 2017, Wakko Warner wrote:

> Phil Turmel wrote:
>> On 10/09/2017 12:10 AM, NeilBrown wrote:
>> 
>> > If there is some action that mdadm can currently be told to perform, and
>> > when it tries to perform that action it corrupts the array, then
>> > it is certainly appropriate to teach mdadm not to perform that action.
>> > It shouldn't even perform that action with --force.   I agree that
>> > changing mdadm like this is complementary to changing the kernel.  Both
>> > are useful.
>> 
>> A certain amount of the trouble with all of this is that the English meaning
>> of "grow" doesn't really match what mdadm allows.
>> 
>> Might it be reasonable to reject "--grow" operations that reduce the
>> final array size, and introduce the complementary "--reduce" operation
>> that rejects array size increases?
>> 
>> Both operations would share the current code, just apply a different
>> sanity check before proceeding.
>> 
>> mdadm would then at least not violate the rule of least surprise.
>
> As a general user of md raid and as a reader of the list, I would agree that
> this would be a better solution.  Thinking in terms of lvm, there's lvreduce
> and lvextend.  IMO, --force wouldn't be needed for --reduce (I was originally
> thinking of --shrink).
>
> On a side note, is it possible for the lower layers to know what the last
> used sector is?  I.e. lvm on top of raid with only 10% allocated, where the
> last used sector is around the 10% mark.  (If this were possible, --force
> would be required if shrinking would result in inaccessible data.)

No it isn't.  I've occasionally thought of adding functionality so that
a device could ask its client (e.g. filesystem, lvm, etc.) if
shrinking is OK - but it hasn't happened yet.
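
(From userspace you can at least ask lvm itself where its allocated extents
sit - the admin can see that even though md can't.  A rough sketch, with a
hypothetical /dev/md0 as the PV:

    # list allocated physical-extent segments on the PV, with the
    # starting extent, length, and owning LV for each segment
    pvs --segments -o pv_name,pvseg_start,pvseg_size,lv_name /dev/md0

    # or the more verbose per-extent allocation map
    pvdisplay --maps /dev/md0

The end of the highest allocated segment is roughly the minimum you would
have to keep.)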

>
> I recently did a shrink of 4x 2TB drives so that I could replace the 2TB
> drives with 80GB drives (yes, a big shrink!).  It would have been nice for
> mdadm to know the smallest size that wouldn't destroy my lvm volumes that
> were on top.

Guess, try, see if data is still accessible.  If not, revert the change.
If you have a filesystem on the raid, fsck will complain if you made it
too small.  I don't know what you would try with lvm.  pvscan?
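
(For what it's worth, a rough sketch of that kind of non-destructive check,
with a hypothetical /dev/md0; everything here is read-only:

    # filesystem directly on the array: a read-only fsck will complain
    # if the shrink cut into the filesystem
    fsck -n /dev/md0

    # lvm on top: verify the PV metadata and look at the reported sizes
    # without writing anything
    pvck /dev/md0
    pvs -o pv_name,pv_size,pv_free /dev/md0

If anything complains, grow the array back to its previous size before
writing to it.)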

NeilBrown


