On 05/04/2016 16:28, Phil Turmel wrote:
> If your array has write-intent bitmaps, use --re-add instead of --add.
> It'll be quick. Otherwise just --add and let it rebuild.
Phil, thanks for the advice.
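Just to make sure I have the incantations right before I touch anything,
I take the two options to be roughly the following (using my array and
the returning member) - please correct me if the form is wrong:

    # quick path, only valid if the array has a write-intent bitmap
    mdadm /dev/md127 --re-add /dev/sdb5

    # otherwise: plain add, followed by a full resync
    mdadm /dev/md127 --add /dev/sdb5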
I hit an unexpected problem while fixing the partition table on /dev/sdb,
the disk that dropped out of the RAID1 array. The problem is that
/dev/sdb is *smaller* than /dev/sdc (the working array member), despite
the disks being identical products from WD. gdisk complains that
partition 5 (/dev/sdb5), which is to be the RAID1 partner for the LVM
containing all our backed-up files, is - together with the other
partitions - too big to fit on the /dev/sdb disk.
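For reference, these are roughly the commands I'm using to compare the
two disks and their partition tables (in case I'm misreading something):

    # raw device sizes in bytes
    lsblk -b -o NAME,SIZE /dev/sdb /dev/sdc

    # partition tables as gdisk sees them
    gdisk -l /dev/sdb
    gdisk -l /dev/sdc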
Presumably RAID1 won't accept an added partition that is smaller than
the component already in the running, degraded array? Am I right in
thinking that the LVM then couldn't be carried safely on the underlying
md device? lsdrv reports that /dev/md127 has 0 free, so the LVM appears
to occupy the whole of /dev/md127, and it must therefore be using the
whole of the underlying /dev/sdc5, since sdc is the only active member
at the moment (the RAID1 still being degraded).
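For completeness, beyond lsdrv these are the sort of checks I'm basing
that on - again, tell me if I'm reading them wrongly:

    # component/array sizes of the degraded RAID1
    mdadm --detail /dev/md127

    # how much of the PV on md127 is allocated, in sectors
    pvs --units s -o pv_name,pv_size,pv_free /dev/md127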
To protect the LVM, what would be the right approach? Should I define a
slightly shorter 'partner' partition on the failed disk (/dev/sdb)? I
would think not, but I would welcome advice.
I did consider reducing the size of one of the other partitions on
/dev/sdb instead: there's a 2G swap partition that could become 1.5G,
since there's another 2G of swap on the working disk anyway. That way
the partner partitions for the real data could be the same size, though
not in exactly the same place on each disk. I think this might work?
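If that's sensible, I imagine the repartitioning of /dev/sdb would look
something like the following with sgdisk. The swap partition number (3)
is my guess for the new table, and 1536M stands in for the 1.5G figure;
I'd double-check everything against sdc before writing anything:

    # exact geometry of the existing RAID member, to match against
    sgdisk -i 5 /dev/sdc

    # smaller swap partition on sdb (type 8200 = Linux swap)
    sgdisk -n 3:0:+1536M -t 3:8200 /dev/sdb

    # RAID partner partition in the remaining space (type fd00 = Linux RAID)
    sgdisk -n 5:0:0 -t 5:fd00 /dev/sdb

    # confirm the new sdb5 ends up at least as big as sdc5
    sgdisk -i 5 /dev/sdb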
regards, Ron