Re: Big trouble during reassemble a Raid5


 



Hello,

It's my fault too: the mailing list doesn't accept HTML format,
and my provider's mobile site doesn't offer a plain-text option. :-(

Back to the issue:

I have stopped the array:
mdadm --stop /dev/md2
mdadm: stopped /dev/md2

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
unused devices: <none>

And when I try to re-assemble, the command aborts:
mdadm --assemble --force /dev/md2 /dev/sd[bcde]1
mdadm: /dev/md2 assembled from 2 drives and 1 spare - not enough to start the array.


The command won't re-acquire the sdc1 device:
cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
md2 : inactive sde1[4](S) sdd1[2](S) sdb1[5](S)
      4101001112 blocks super 1.2
       
unused devices: <none>
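(Editor's note, not in the original mail: the refusal is consistent with the event counters in the `mdadm -E` dumps quoted further down this thread; sdc1 lags the other members, so mdadm drops it and is left with only 2 data drives plus a spare. A small sketch with the counters hard-coded from those dumps; on the live system the equivalent information comes from `mdadm -E /dev/sd[bcde]1`.)

```shell
# Sketch only: event counters copied from the mdadm -E dumps in this thread.
# On the live system you would extract them with something like:
#   mdadm -E /dev/sd[bcde]1 | grep -E '^/dev|Events'
list_events() {
  printf '%s %s\n' \
    /dev/sdb1 167456 \
    /dev/sdc1 167431 \
    /dev/sdd1 167456 \
    /dev/sde1 167456
}
# The member with the lowest counter is the one mdadm refuses to merge.
list_events | sort -k2,2n | head -n1
```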


Many thanks for your help.
Best Regards
Sylvain Depuille

----- Original Message ----- 
From: "John Stoffel" <john@xxxxxxxxxxx> 
To: "sylvain depuille" <sylvain.depuille@xxxxxxxxxxx> 
Cc: "John Stoffel" <john@xxxxxxxxxxx> 
Sent: Wednesday, 31 December 2014 15:14:44 
Subject: Re: Big trouble during reassemble a Raid5 


sylvain> I have removed the failing 1TB disk and replaced it with the 2TB 
sylvain> disk that I ddrescued from it. 

Great, 

sylvain> But I can't re-assemble the array. 
sylvain> mdadm --assemble --force /dev/md2 /dev/sd[bcde]1 
sylvain> mdadm: /dev/sdb1 is busy - skipping 
sylvain> mdadm: /dev/sdd1 is busy - skipping 
sylvain> mdadm: /dev/sde1 is busy - skipping 
sylvain> mdadm: Merging with already-assembled /dev/md/2 
sylvain> mdadm: /dev/md/2 assembled from 2 drives and 1 spare - not enough to start the array. 

I think you first need to stop the array, to make sure all the devices 
aren't in use. Have you looked through the archives of this list for 
previous examples? 

So you should be able to do: 

> mdadm --stop /dev/md2 
> mdadm --assemble --force /dev/md2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 

and send the output. You should also be replying to the mailing list 
as well, which I just noticed you haven't. My fault too! 


sylvain> cat /proc/mdstat : 
sylvain> Personalities : [raid1] [raid6] [raid5] [raid4] 
sylvain> md2 : inactive sdd1[2](S) sde1[4](S) sdb1[5](S) 
sylvain> 4101001112 blocks super 1.2 

sylvain> unused devices: <none> 

sylvain> The result of the command mdadm -E /dev/sd[bcde]1 > mdadm-E-new.log is: 
sylvain> /dev/sdb1: 
sylvain> Magic : a92b4efc 
sylvain> Version : 1.2 
sylvain> Feature Map : 0x1 
sylvain> Array UUID : 2a1440cd:762a90fb:e3bd2f4d:617acb0e 
sylvain> Name : le-bohec:2 (local to host le-bohec) 
sylvain> Creation Time : Tue Apr 9 17:56:19 2013 
sylvain> Raid Level : raid5 
sylvain> Raid Devices : 4 

sylvain> Avail Dev Size : 1953521072 (931.51 GiB 1000.20 GB) 
sylvain> Array Size : 2930276352 (2794.53 GiB 3000.60 GB) 
sylvain> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB) 
sylvain> Data Offset : 2048 sectors 
sylvain> Super Offset : 8 sectors 
sylvain> Unused Space : before=1960 sectors, after=3504 sectors 
sylvain> State : clean 
sylvain> Device UUID : 8506e09c:b87a44ed:7b4ee314:777ce89c 

sylvain> Internal Bitmap : 8 sectors from superblock 
sylvain> Update Time : Sat Dec 27 22:08:34 2014 
sylvain> Bad Block Log : 512 entries available at offset 72 sectors 
sylvain> Checksum : bad52d25 - correct 
sylvain> Events : 167456 

sylvain> Layout : left-symmetric 
sylvain> Chunk Size : 512K 

sylvain> Device Role : Active device 0 
sylvain> Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing) 
sylvain> /dev/sdc1: 
sylvain> Magic : a92b4efc 
sylvain> Version : 1.2 
sylvain> Feature Map : 0x1 
sylvain> Array UUID : 2a1440cd:762a90fb:e3bd2f4d:617acb0e 
sylvain> Name : le-bohec:2 (local to host le-bohec) 
sylvain> Creation Time : Tue Apr 9 17:56:19 2013 
sylvain> Raid Level : raid5 
sylvain> Raid Devices : 4 

sylvain> Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB) 
sylvain> Array Size : 2930276352 (2794.53 GiB 3000.60 GB) 
sylvain> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB) 
sylvain> Data Offset : 2048 sectors 
sylvain> Super Offset : 8 sectors 
sylvain> Unused Space : before=1968 sectors, after=386 sectors 
sylvain> State : clean 
sylvain> Device UUID : 44002aad:d3e17729:a93854eb:4139972e 

sylvain> Internal Bitmap : 8 sectors from superblock 
sylvain> Update Time : Sat Dec 27 22:08:22 2014 
sylvain> Checksum : 6f69285d - correct 
sylvain> Events : 167431 

sylvain> Layout : left-symmetric 
sylvain> Chunk Size : 512K 

sylvain> Device Role : Active device 1 
sylvain> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing) 
sylvain> /dev/sdd1: 
sylvain> Magic : a92b4efc 
sylvain> Version : 1.2 
sylvain> Feature Map : 0x1 
sylvain> Array UUID : 2a1440cd:762a90fb:e3bd2f4d:617acb0e 
sylvain> Name : le-bohec:2 (local to host le-bohec) 
sylvain> Creation Time : Tue Apr 9 17:56:19 2013 
sylvain> Raid Level : raid5 
sylvain> Raid Devices : 4 

sylvain> Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB) 
sylvain> Array Size : 2930276352 (2794.53 GiB 3000.60 GB) 
sylvain> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB) 
sylvain> Data Offset : 2048 sectors 
sylvain> Super Offset : 8 sectors 
sylvain> Unused Space : before=1968 sectors, after=1953507504 sectors 
sylvain> State : clean 
sylvain> Device UUID : 44002aad:d3e17729:a93854eb:4139972e 

sylvain> Internal Bitmap : 8 sectors from superblock 
sylvain> Update Time : Sat Dec 27 22:08:22 2014 
sylvain> Checksum : 6f692876 - correct 
sylvain> Events : 167456 

sylvain> Layout : left-symmetric 
sylvain> Chunk Size : 512K 

sylvain> Device Role : Active device 1 
sylvain> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing) 
sylvain> /dev/sde1: 
sylvain> Magic : a92b4efc 
sylvain> Version : 1.2 
sylvain> Feature Map : 0x9 
sylvain> Array UUID : 2a1440cd:762a90fb:e3bd2f4d:617acb0e 
sylvain> Name : le-bohec:2 (local to host le-bohec) 
sylvain> Creation Time : Tue Apr 9 17:56:19 2013 
sylvain> Raid Level : raid5 
sylvain> Raid Devices : 4 

sylvain> Avail Dev Size : 4294963199 (2048.00 GiB 2199.02 GB) 
sylvain> Array Size : 2930276352 (2794.53 GiB 3000.60 GB) 
sylvain> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB) 
sylvain> Data Offset : 2048 sectors 
sylvain> Super Offset : 8 sectors 
sylvain> Unused Space : before=1960 sectors, after=2341445631 sectors 
sylvain> State : clean 
sylvain> Device UUID : 0ebce28d:1a792d55:76a86538:12cc94dd 

sylvain> Internal Bitmap : 8 sectors from superblock 
sylvain> Update Time : Sat Dec 27 22:08:34 2014 
sylvain> Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present. 
sylvain> Checksum : 3801cfa - correct 
sylvain> Events : 167456 

sylvain> Layout : left-symmetric 
sylvain> Chunk Size : 512K 

sylvain> Device Role : spare 
sylvain> Array State : A.A. ('A' == active, '.' == missing, 'R' == replacing) 
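(Editor's note, not in the original mail: reading the dumps above, sdc1's Events counter is 25 behind the other three members, and sde1 now reports itself as a spare; curiously, sdd1 also reports the same Device UUID as sdc1, which may be an artifact of the rescue copy. The gap arithmetic, with the values copied from the dumps:)

```shell
# Event-count gap between the stale member (sdc1) and the freshest
# members, values copied from the mdadm -E dumps above.
newest=167456
sdc1=167431
echo $(( newest - sdc1 ))
```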


sylvain> Could you help me re-assemble the array safely? 

sylvain> Thanks in advance. 
sylvain> Best regards, 
sylvain> Sylvain Depuille (always in trouble). 

sylvain> ----- Original Message ----- 
sylvain> From: "John Stoffel" <john@xxxxxxxxxxx> 
sylvain> To: "Sylvain Depuille" <sylvain.depuille@xxxxxxxxxxx> 
sylvain> Cc: "John Stoffel" <john@xxxxxxxxxxx> 
sylvain> Sent: Tuesday, 30 December 2014 22:07:04 
sylvain> Subject: Re: Big trouble during reassemble a Raid5 


Sylvain> I'm in front of the PC! Pass 1 of the ddrescue is not 
Sylvain> finished! Sorry for the false news! 

sylvain> No problem. Let it finish before you make any other attempts to 
sylvain> re-assemble the array. 

Sylvain> If the 5 passes take the same time, the command will finish in 20 to 24 days. 

Sylvain> Thanks in advance 
Sylvain> Best Regards 


Sylvain> Sent from a mobile phone 

>>> On 29 Dec 2014, at 21:36, John Stoffel <john@xxxxxxxxxxx> wrote: 
>>> 
>>> 
sylvain> Hi John, thanks for your answer! I had replaced a 1TB disk with a 
sylvain> 3TB disk to grow the array. If I can re-insert the old 1TB disk 
sylvain> in place of the 3TB one, only some logs and history will be 
sylvain> corrupted. I think that is the best way to restart the array 
sylvain> without data loss. But I don't know how to change the timestamp 
sylvain> of that one RAID disk. Do you have a magic command to change the 
sylvain> timestamp of a RAID partition, and how do I find the timestamp of 
sylvain> the other disks in the array? After the array restarts, I can 
sylvain> replace the failing disk with a new 3TB one. For the ddrescue I 
sylvain> have a spare 2TB disk. It's not the same geometry; is that 
sylvain> possible? Thanks in advance for your help 
>>> 
>>> Sylvain, 
>>> 
>>> Always glad to help here. I'm going to try and understand what you 
>>> wrote and do my best to reply. 
>>> 
>>> Is the 1TB disk the bad disk? And if you re-insert it and re-start 
>>> the RAID5 array, you only have some minor lost files? If so, I would 
>>> probably just copy all the data off the RAID5 onto the single 3TB disk 
>>> as a quick and dirty backup, then I'd use 'ddrescue' to copy the bad 
>>> 1TB disk onto the new 2TB disk. 
>>> 
>>> All you would have to do is make a partition on the 2TB disk which is 
>>> the same size as (or a little bigger than) the partition on the 1TB disk, 
>>> then copy the partition over like this: 
>>> 
>>> ddrescue /dev/sd[BAD DISK LETTER HERE]1 /dev/sd[2TB disk letter]1 \ 
>>> /tmp/rescue.log 
>>> 
>>> So say the bad disk is sdc, and the good 2tb is sdf, you would do: 
>>> 
>>> ddrescue /dev/sdc1 /dev/sdf1 /tmp/rescue.log 
>>> 
>>> and let it go. Then you would assemble the array using the NEW 2tb 
>>> disk. Ideally you would remove the bad 1tb disk from the system when 
>>> trying to do this. 
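(Editor's note, not in the original mail: one pre-flight check worth doing before the copy is confirming the destination partition has at least as many 512-byte sectors as the source; on real devices `blockdev --getsz` reports that. A sketch using the partition sizes quoted in the MBR dump later in this thread:)

```shell
# Hypothetical pre-flight check before ddrescue: the destination must be
# at least as large as the source, in 512-byte sectors.
# On real devices:  src=$(blockdev --getsz /dev/sdc1)
#                   dst=$(blockdev --getsz /dev/sdf1)
fits() {
  if [ "$2" -ge "$1" ]; then
    echo "ok: destination is large enough"
  else
    echo "error: destination is $(( $1 - $2 )) sectors too small"
  fi
}
# Sizes taken from the MBR partition dump in this thread (sdc1 source,
# sdb1-sized target as an example).
fits 1953520002 1953523120
```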
>>> 
>>> But you really do need send us the output of the following commands: 
>>> 
>>> cat /proc/mdstat 
>>> cat /proc/partitions 
>>> mdadm --detail /dev/md# 
>>> 
>>> Do the above for the RAID5 array. 
>>> 
>>> mdadm --examine /dev/sd#1 
>>> 
>>> for each disk in the RAID5 array. 
>>> 
>>> And we can give you better advice. 
>>> 
>>> Good luck! 
>>> 
>>> 
sylvain> ---------------------------------- 
sylvain> Sylvain Depuille 
sylvain> sylvain.depuille@xxxxxxxxxxx 
sylvain> ----- Original Message ----- 
sylvain> From: John Stoffel <john@xxxxxxxxxxx> 
sylvain> To: sylvain depuille <sylvain.depuille@xxxxxxxxxxx> 
sylvain> Cc: linux-raid@xxxxxxxxxxxxxxx 
sylvain> Sent: Mon, 29 Dec 2014 19:32:04 +0100 (CET) 
sylvain> Subject: Re: Big trouble during reassemble a Raid5 
>>> 
sylvain> Sylvain, I would recommend that you buy a replacement disk 
sylvain> for the one throwing errors and then run dd_rescue to copy as 
sylvain> much data from the dying disk to the replacement. Then, and 
sylvain> only then, do you try to reassemble the array with the 
sylvain> --force option. That disk is dying, and dying quickly. Can 
sylvain> you also post the output of mdadm -E /dev/sd[bcde]1 for each 
sylvain> disk, even the dying one, so we can look at the counts and 
sylvain> give you some more advice. Also, the output of the mdadm 
sylvain> --assemble --force /dev/md2 /dev/sd[bcde]1 would also be 
sylvain> good. The more info the better. Good luck! John 
>>> 
sylvain> I'm sorry to ask these questions, but a 4-disk RAID5 is in big 
sylvain> trouble during re-assembly: 2 disks are out of order. I had 
sylvain> replaced one disk of the array (sde) to grow it, but a second 
sylvain> disk (sdc) hit too many bad sectors during the re-assembly, 
sylvain> which shut the re-assembly down: 
sylvain> "mdadm --assemble --force /dev/md2 /dev/sd[bcde]1" 
sylvain> I tried to correct the bad sectors with badblocks, but the disk 
sylvain> ran out of spare sectors and still has some bad ones: 
sylvain> badblocks -b 512 -o badblocks-sdc.txt -v -n /dev/sdc 1140170000 1140169336 
sylvain> 1140169400 1140169401 1140169402 1140169403 1140169404 
sylvain> 1140169405 1140169406 1140169407 1140169416 1140169417 
sylvain> 1140169418 1140169419 1140169420 1140169421 1140169422 
sylvain> 1140169423 
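(Editor's note, not in the original mail: as a rough aside on those badblocks numbers, with the geometry reported by `mdadm -E` in this thread, a partition starting at sector 63, Data Offset 2048 sectors, Chunk Size 512K, i.e. 1024 sectors, a bad 512-byte sector on the raw disk can be mapped to the stripe-chunk index it falls in on that member. This is a sketch under those assumptions, not an mdadm feature:)

```shell
# Rough sketch: map a raw-disk 512-byte sector number to the md chunk
# index it falls in on this member, assuming the geometry reported by
# mdadm -E in this thread: partition start 63, data offset 2048 sectors,
# 512K chunks (1024 sectors each).
chunk_of() { echo $(( ( $1 - 63 - 2048 ) / 1024 )); }
# First bad sector from the badblocks run quoted above.
chunk_of 1140169336
```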
>>> 
sylvain> For information, mdadm --examine returns: 
sylvain> cat mdadm-exam.txt 
sylvain> /dev/sdb: MBR Magic : aa55 Partition[0] : 1953523120 sectors at 2048 (type fd) 
sylvain> /dev/sdc: MBR Magic : aa55 Partition[0] : 1953520002 sectors at 63 (type fd) 
sylvain> /dev/sdd: MBR Magic : aa55 Partition[0] : 1953520002 sectors at 63 (type fd) 
sylvain> /dev/sde: MBR Magic : aa55 Partition[0] : 4294965247 sectors at 2048 (type fd) 
sylvain> I see 2 ways to solve the issue. The first is some special command 
sylvain> to skip bad sectors during re-assembly with "mdadm --assemble 
sylvain> --force /dev/md2 /dev/sd[bcde]1". The second is to put the old 
sylvain> good sde disk back, but some data on the array has changed since 
sylvain> I removed it. That data is not important; it's only logs and 
sylvain> history activity. What can I do to recover as much data as 
sylvain> possible without too much risk? Thanks in advance. Best regards, 
sylvain> ---------------------------------- 
sylvain> Sylvain Depuille (in trouble) 
sylvain> sylvain.depuille@xxxxxxxxxxx 
>>> 

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


