Re: Assume-clean for md grow

Chris Webb <chris@xxxxxxxxxxxx> writes:

> NeilBrown <neilb@xxxxxxx> writes:
> 
> > Or if you want to be slightly more subtle, remove the
> > 	if (mddev->pers)
> > 		return -EBUSY;
> > from resync_start_store.  Then before a grow that you want to be
> > --assume-clean, write into /sys/block/mdXX/md/resync_start the
> > number of sectors in the final raid1 array.
> 
> Hi Neil. Thanks for this suggestion: it looks fine for what we're looking to
> achieve. Interestingly, this will mean writing a smaller value than the one
> shown by a read from this file for an in-sync array, but it will still work?

For instance, 

  3# cat /sys/block/md127/{size,md/resync_start}
  2097136
  18446744073709551615

I just tried growing the slots up from 1G to 3G, then

  echo $((2097136 + 2*1024*1024*2)) >/sys/block/md127/md/resync_start
  mdadm --grow /dev/md127 --size=3145720

but this gives me

  md127 : active raid1 dm-3[1] dm-2[0]
        3145720 blocks super 1.1 [2/2] [UU]
          resync=PENDING

in /proc/mdstat, which is presumably not right?

Does this sysfs file hold the size of the array or the size of the
components, for implementing the non-RAID1 case?

Also, if I don't know the size of the final array[1], is it safe to write a
value much larger than the size of the array in here, or will that cause
future grows to be clean when this isn't necessarily intended?
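For reference, the sequence I'm attempting can be sketched as below. This is a hedged sketch, not a tested recipe: it assumes resync_start is expressed in 512-byte sectors of the array, and that the -EBUSY check has been removed from resync_start_store as Neil suggested. The device name and sizes are just the ones from my example above.

```shell
#!/bin/sh
# Current array size in sectors, as read from /sys/block/md127/size.
OLD_SECTORS=2097136

# We are growing by 2 GiB, expressed here in KiB.
GROW_KIB=$((2 * 1024 * 1024))

# Two 512-byte sectors per KiB, so the intended final array size in sectors is:
NEW_SECTORS=$((OLD_SECTORS + GROW_KIB * 2))
echo "intended resync_start: ${NEW_SECTORS}"

# Then, before the grow (commented out since it needs the real device):
#   echo ${NEW_SECTORS} > /sys/block/md127/md/resync_start
#   mdadm --grow /dev/md127 --size=$((NEW_SECTORS / 2))
```

With the example numbers this prints 6291440, matching the value I echoed into resync_start above.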

[1] One of the things which has been most awkward with using md as part of an
automated storage system has been going from component size to available
array size and back again, given that bitmap reservation depends on the
original size of the array, not its current size. (In the end,
we've cheated and always written everything in terms of change in component
size vs change in array size, and been generous with the amount of space we
allocate to components on initial device create. It does feel like I'm
coding far too much knowledge of the internal choices mdadm makes into my
management layer, though!)

Cheers,

Chris.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
