Re: Raid 6, 9 1.5TB drives, 2 "fail" one after the other

On Thu, 07 Feb 2013 10:39:53 -0500 Dragos Dobrescu <dragos@xxxxxxxxxxx> wrote:

> Hi,
> I need some help.
> I noticed the server was in recovery mode. It had just dropped a "faulty" drive. I checked the drive and it looked like it was working. When the recovery was done, I added the drive back after recreating the partition.
> 
> As soon as I did, mdadm informed me that it had dropped it again, and dropped another drive at the same time. I removed both drives and added a brand new drive (a second is on the way), which the system accepted and started recovering onto.
> 
> What I don't understand is this: I plugged the drives into another computer with a SATA-USB adapter and performed a full SMART check, which came back successful, apart from a few bad sectors and some warnings of past overheating.
> What is going on?
> Thank you for your help.
> 
> Dragos
> 

No one replied?  I felt sure someone else would.

Maybe you have a problem with your controller card, or with a cable, or
something.

Normally if md stops a drive there will be messages in the kernel log about
access failures.  Do you still have all the logs from when this happened?
Are there any messages from the kernel?
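
For example, you could look with something along these lines (the array
and drive names /dev/md0 and /dev/sda are only placeholders for your own
devices, and the kernel log path varies by distribution):

  # look for ATA/SCSI errors around the time the drives were dropped
  dmesg | grep -iE 'ata|sd[a-z]|md'
  grep -iE 'ata|sd[a-z]|md' /var/log/kern.log

  # current state of the array, and SMART data for one member
  mdadm --detail /dev/md0
  smartctl -a /dev/sda

If md kicked the drives out because of link or cable trouble rather than
bad media, it usually shows up there as ATA link resets or read errors
just before the "kicking non-fresh" or failure messages.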

NeilBrown


