Re: Inoperative array shown as "active"

On 09/14/2013 12:59 AM, NeilBrown wrote:
> On Sat, 14 Sep 2013 00:39:20 -0500 Ian Pilcher <arequipeno@xxxxxxxxx> wrote:
>> AFAICT, this means that there is no single item in either /proc/mdstat
>> or sysfs that indicates that an array such as the example above has
>> failed.  My program will have to parse the RAID level, calculate the
>> number of failed members (if any), and determine whether that RAID level
>> can survive that number of failures.  Is this correct?
> 
> Yes.
> 
>>
>> Anything I'm missing?
> 
> mdadm already does this for you. "mdadm --detail /dev/md0".
> 

Yeah, I haven't yet ruled out calling out to mdadm.  I'm already doing
that with hddtemp and smartctl.  It just seems a bit inefficient to do
so when all of the information is sitting right there in /proc/mdstat.
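Something like this (untested, and the per-level failure tolerances are
my own rough guesses -- raid10 in particular depends on the layout) is
what I had in mind for the /proc/mdstat approach:

import re

# Rough failure tolerance by RAID level.  raid10's real tolerance
# depends on the layout, so I'm conservatively assuming 1.
def max_failures(level, nslots):
    if level == 'raid1':
        return nslots - 1
    if level in ('raid4', 'raid5', 'raid10'):
        return 1
    if level == 'raid6':
        return 2
    return 0  # raid0/linear can't survive any failure

def failed_arrays(path='/proc/mdstat'):
    failed = []
    with open(path) as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        m = re.match(r'^(md\S+) : active (?:\(auto-read-only\) )?(\S+)', line)
        if not m:
            continue
        name, level = m.groups()
        # The next line carries "... blocks ... [n/m] [UU_]";
        # n is the slot count, m the number of working members.
        st = re.search(r'\[(\d+)/(\d+)\]', lines[i + 1])
        if not st:
            continue
        nslots, working = int(st.group(1)), int(st.group(2))
        if nslots - working > max_failures(level, nslots):
            failed.append(name)
    return failed

print(failed_arrays())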

A quick test reveals that running "mdadm --detail /dev/md?*" takes
around 2 seconds on the NAS and produces about 20KB of output.  (I have
20 RAID devices -- hooray GPT! -- and an Atom processor.)  Hmmm.
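One thing that might make the mdadm route cheaper to live with: per my
reading of mdadm(8), --detail takes a --test flag that reflects array
health in the exit status (0 clean, 1 degraded, 2 dead/not functional),
so I could skip parsing the 20KB of output entirely.  Untested sketch:

import glob
import os
import subprocess

# With --test, "mdadm --detail" encodes health in the exit status
# (per mdadm(8)): 0 = clean, 1 = degraded, 2 = dead/not functional.
with open(os.devnull, 'w') as devnull:
    for md in sorted(glob.glob('/dev/md?*')):
        rc = subprocess.call(['mdadm', '--detail', '--test', md],
                             stdout=devnull, stderr=devnull)
        if rc == 1:
            print('%s: degraded' % md)
        elif rc == 2:
            print('%s: failed' % md)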

Thanks for the very quick response!

-- 
========================================================================
Ian Pilcher                                         arequipeno@xxxxxxxxx
Sometimes there's nothing left to do but crash and burn...or die trying.
========================================================================