Re: Failed Raid 5 - one Disk possibly Out of date - 2nd disk damaged

Thanks a lot.
I'll try to get some new drives, do a dd copy of the old ones, and then
try to assemble the raid again.
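
For the copy I'm thinking of GNU ddrescue rather than plain dd, something
like this (device names and the map file path are just examples, with
/dev/sde standing in for the replacement drive):

    ddrescue -f -n /dev/sdc /dev/sde /root/sdc.map

That way a read error doesn't abort the whole copy, and the map file lets
me resume or retry the bad areas later.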

The drives are CMR drives, a mix of Western Digital and Seagate.
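
Before copying I'll also pull the SMART data on the suspect drive to see
how bad it really is, e.g.:

    smartctl -x /dev/sdc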

Regards

Martin

On Wed, 17 Nov 2021 at 18:56, Wols Lists
<antlists@xxxxxxxxxxxxxxx> wrote:
>
> On 17/11/2021 12:22, Martin Thoma wrote:
> > Hi All,
> >
>
>
> >
> > So /dev/sdd1 was considered out of date; when I ran the command again,
> > the raid assembled without sdd1.
> >
> > When I tried reading data, after a while it stopped (probably when the
> > data was on /dev/sdc).
> >
> > dmesg showed this:
> > [  368.433658] sd 8:0:0:1: [sdc] tag#0 FAILED Result: hostbyte=DID_OK
> > driverbyte=DRIVER_SENSE
> > [  368.433664] sd 8:0:0:1: [sdc] tag#0 Sense Key : Medium Error [current]
> > [  368.433669] sd 8:0:0:1: [sdc] tag#0 Add. Sense: Unrecovered read error
> > [  368.433675] sd 8:0:0:1: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00
> > 00 08 81 d8 00 00 00 08 00 00
> > [  368.433679] blk_update_request: critical medium error, dev sdc, sector 557528
> > [  368.433689] raid5_end_read_request: 77 callbacks suppressed
> > [  368.433692] md/raid:md0: read error not correctable (sector 555480 on sdc1).
> > [  375.944254] sd 8:0:0:1: [sdc] tag#0 FAILED Result: hostbyte=DID_OK
> > driverbyte=DRIVER_SENSE
> >
> > and the raid stopped again.
> >
> > How can I force the raid to assemble including /dev/sdd1 but without
> > /dev/sdc (because that drive is possibly damaged now)?
> > With an mdadm --create --assume-clean ... command?
>
> NO NO NO NO NO !!!
> >
> > I'm using  mdadm/zesty-updates,now 3.4-4ubuntu0.1 amd64 [installed] on
> > Linux version 4.10.0-21-generic (buildd@lgw01-12) (gcc version 6.3.0
> > 20170406 (Ubuntu 6.3.0-12ubuntu2) )
> >
> That's an old Ubuntu, and an ancient mdadm 3.4?
>
> As a very first action, you need to source a much newer rescue disk!
>
> As a second action, if you think sdc and sdd are dodgy, then you need to
> replace them - use dd or ddrescue to do a brute-force copy.
>
> You don't mention what drives they are. Are they CMR? Are they suitable
> for raid? For replacement drives, I'd look at upsizing to 4TB for a bit
> of headroom maybe (or look at moving to raid 6). And look at Seagate
> IronWolf, WD Red *PRO*, or Toshiba N300. (Personally I'd pass on the WD ...)
>
> Once you've copied sdc and sdd, you can look at doing another force
> assemble, and you'll hopefully get your array back. At least the event
> count info implies damage to the array should be minimal.
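>
> A rough sketch, assuming the array is md0 and using example device names
> (and pointing mdadm at the copies, not the failing originals) - check the
> event counts first, then force-assemble:
>
>    mdadm --examine /dev/sd[bd]1 | grep -i events
>    mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdd1 ...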
>
> https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
>
> Read, learn, and inwardly digest ...
>
> And DON'T do anything that will make changes to the disks - like a
> re-create!!!
>
> Cheers,
> Wol



-- 
With kind regards

Martin Thoma

Göhrenstraße 3
72414 Rangendingen

Cell:  0176 80 16 03 68

Mail:  Thoma-Martin@xxxxxxx



