Re: Resize Raid5 devices

On 19.06.2009 at 03:58, Neil Brown wrote:
>> There is an internal write-intent bitmap on the array:
>> DatenGrab:/media # cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4] [raid1]
>> md3 : active raid5 sdf1[3] sdi1[0] sdj1[4] sdh1[1]
>>      3457099008 blocks super 1.2 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
>>      bitmap: 0/275 pages [0KB], 2048KB chunk
>>
>> Do you have an idea why this bitmap has been ignored?

> Probably a kernel bug.  There have been a couple of fixes relating to
> this since 2.6.25.  Hopefully it is all working in 2.6.30...
>
> The bug that you hit was probably the one fixed by
>  commit a0da84f35b25875870270d16b6eccda4884d61a7
> which is in 2.6.27.
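
(For the archives: a quick way to check which release first picked up
that fix, assuming you have a kernel git checkout at hand, is

  git describe --contains a0da84f35b25875870270d16b6eccda4884d61a7

which prints the first tag containing the commit.)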

Hmm ... OK. So it may be a good idea to update to OpenSuSE 11.1 before
growing the other devices. Will do that at the weekend.

> Alternatively, stop the raid completely, resize all the partitions and
> start the raid again.  No need to fail/re-add each disk in turn.

good idea ...
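
For the archives, a minimal sketch of that procedure, using the member
partitions from the mdstat above and assuming each one is grown in
place (same start sector) with your partitioning tool of choice:

  # stop the array first (unmount any filesystem on it)
  mdadm --stop /dev/md3

  # grow sdf1, sdh1, sdi1 and sdj1 with fdisk/parted/etc., keeping
  # the same start sector, then re-assemble:
  mdadm --assemble /dev/md3 /dev/sdf1 /dev/sdh1 /dev/sdi1 /dev/sdj1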

> And when you re-assemble the array, use --update=devicesize.  That
> will ensure that md sees all of the new space on the devices.

This hint will probably save me a lot of time.
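
For completeness, a sketch of the re-assembly with that option (same
devices as above; the final --grow step is my assumption of the usual
follow-up, to let the array actually use the enlarged components):

  mdadm --assemble /dev/md3 --update=devicesize \
        /dev/sdf1 /dev/sdh1 /dev/sdi1 /dev/sdj1

  # then extend the array into the new space (a resync of the added
  # region will follow) and check progress:
  mdadm --grow /dev/md3 --size=max
  cat /proc/mdstat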

Thanks a lot
Ralf

--
Van Roy's Law: -------------------------------------------------------
       An unbreakable toy is useful for breaking other toys.


