Re: and again: broken RAID5


 




On Thu, 10 May 2012 10:07:21 +0200 Lars Schimmer <l.schimmer@xxxxxxxxxxxxx>
wrote:

> 
> Hi!
> 
> 
> 
> I'd appreciate some tips on how to get two broken RAID5 arrays running again.
> 
> They consist of 4 drives; one was kicked out on Saturday evening, and a
> second drive threw read/write errors before I could replace the first
> one.
> 
> Now I have all four drives running again; it looks like it was a
> controller/cable problem. I replaced them.
> 
> But mdadm tells me only 2 of 4 devices are available and it cannot start the arrays:
> 
> md2 : inactive sdh2[0](S) sdi2[4](S) sdf2[6](S) sdj2[5](S)
> 4251770144 blocks super 1.2
> 
> md1 : inactive sdh1[0](S) sdi1[4](S) sdf1[6](S) sdj1[5](S)
> 2097147904 blocks super 1.2
> 
> mdadm -E tells me e.g. for md1:
> 
> /dev/sdh1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : a503cea9:eb613db1:c8909233:fd5415ce
> Name : debian:1  (local to host debian)
> Creation Time : Sat Feb 11 13:39:46 2012
> Raid Level : raid5
> Raid Devices : 4
> Avail Dev Size : 1048573952 (500.00 GiB 536.87 GB)
> Array Size : 3145720320 (1500.00 GiB 1610.61 GB)
> Used Dev Size : 1048573440 (500.00 GiB 536.87 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 710a2926:640bc675:7b5fe308:861d1883
> Update Time : Sun May  6 13:57:22 2012
> Checksum : e387e748 - correct
> Events : 206286
> Layout : left-symmetric
> Chunk Size : 128K
> Device Role : Active device 0
> Array State : AA.. ('A' == active, '.' == missing)
> 
> /dev/sdf1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : a503cea9:eb613db1:c8909233:fd5415ce
> Name : debian:1  (local to host debian)
> Creation Time : Sat Feb 11 13:39:46 2012
> Raid Level : raid5
> Raid Devices : 4
> Avail Dev Size : 1048573952 (500.00 GiB 536.87 GB)
> Array Size : 3145720320 (1500.00 GiB 1610.61 GB)
> Used Dev Size : 1048573440 (500.00 GiB 536.87 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : f53b097c:9762f2a4:f2af9011:3fa7ced7
> Update Time : Sun May  6 13:38:26 2012
> Checksum : c5f745bd - correct
> Events : 206273
> Layout : left-symmetric
> Chunk Size : 128K
> Device Role : Active device 2
> Array State : AAA. ('A' == active, '.' == missing)
> 
> /dev/sdi1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : a503cea9:eb613db1:c8909233:fd5415ce
> Name : debian:1  (local to host debian)
> Creation Time : Sat Feb 11 13:39:46 2012
> Raid Level : raid5
> Raid Devices : 4
> Avail Dev Size : 1048573952 (500.00 GiB 536.87 GB)
> Array Size : 3145720320 (1500.00 GiB 1610.61 GB)
> Used Dev Size : 1048573440 (500.00 GiB 536.87 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 79eb5c0c:db73e2f8:21d9b7f8:97936da4
> Update Time : Sun May  6 01:32:35 2012
> Checksum : 5dff4e2a - correct
> Events : 197114
> Layout : left-symmetric
> Chunk Size : 128K
> Device Role : Active device 3
> Array State : AAAA ('A' == active, '.' == missing)
> 
> /dev/sdj1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : a503cea9:eb613db1:c8909233:fd5415ce
> Name : debian:1  (local to host debian)
> Creation Time : Sat Feb 11 13:39:46 2012
> Raid Level : raid5
> Raid Devices : 4
> Avail Dev Size : 1048573952 (500.00 GiB 536.87 GB)
> Array Size : 3145720320 (1500.00 GiB 1610.61 GB)
> Used Dev Size : 1048573440 (500.00 GiB 536.87 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : e8055bca:0f3c9b54:421bc2c2:4a72359b
> Update Time : Sun May  6 13:57:22 2012
> Checksum : 388b23fc - correct
> Events : 206286
> Layout : left-symmetric
> Chunk Size : 128K
> Device Role : Active device 1
> Array State : AA.. ('A' == active, '.' == missing)
> 
> 
> So all 4 disks report a different array state. Is there any chance of
> getting the RAID5 running again and reading some data from it?
> 
> Would the mdadm -C --assume-clean option help me in this case at all?

Just add --force to the --assemble command.
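
For reference, a sketch of what that looks like for the md1 array above, using the device names from the original post (verify them against your own /proc/mdstat before running anything):

    # Stop the half-assembled, inactive arrays first
    mdadm --stop /dev/md1
    mdadm --stop /dev/md2

    # Force-assemble from the member partitions. --force tells mdadm to
    # accept devices whose event counts have diverged slightly
    # (206286 vs 206273 here) instead of refusing to start the array.
    mdadm --assemble --force /dev/md1 /dev/sdh1 /dev/sdj1 /dev/sdf1 /dev/sdi1
    mdadm --assemble --force /dev/md2 /dev/sdh2 /dev/sdj2 /dev/sdf2 /dev/sdi2

    # Check the result before mounting anything
    cat /proc/mdstat
    mdadm --detail /dev/md1

Note that /dev/sdi1 is roughly 9000 events behind the others (197114 vs 206286), so mdadm may start the array degraded with the three fresher devices and leave sdi1 out; it can then be re-added with "mdadm /dev/md1 --add /dev/sdi1" to trigger a rebuild. Mounting read-only and backing up first is the safe order of operations.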

NeilBrown



