Re: Creating RAID5 with four devices and end up with 5 (one removed and one spare). Why?

On Mon Mar 03, 2008 at 09:55:45AM +0100, Tor Arne Vestbø wrote:

> Hi!
>
> I'm trying to build a Linux RAID5 with four (4) 750GB disks, but no matter 
> what I do I end up with mdadm listing five (5) devices and telling me that 
> one of them is a spare, and another one is failed/removed. I've been 
> googling and reading HOWTOs for a week now, but can't figure it out. Here's 
> what I do:
>
> monstre:~/buildroot # mdadm --create /dev/md0 --level=5 --raid-devices=4 
> /dev/sd[cdef]1
>
> mdadm: array /dev/md0 started.
>
> monstre:~/buildroot # cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active(auto-read-only) raid5 sdf1[4](S) sde1[2] sdd1[1] sdc1[0]
>       2197715712 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
>
> unused devices: <none>
>
>
> monstre: # mdadm --examine /dev/sdd1
> /dev/sdd1:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : a0186556:4ffb5a2a:822f8875:94ae7d2c
>   Creation Time : Sun Mar  2 22:52:53 2008
>      Raid Level : raid5
>   Used Dev Size : 732571904 (698.64 GiB 750.15 GB)
>      Array Size : 2197715712 (2095.91 GiB 2250.46 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 0
>
>     Update Time : Sun Mar  2 22:59:54 2008
>           State : clean
>  Active Devices : 3
> Working Devices : 4
>  Failed Devices : 1
>   Spare Devices : 1
>        Checksum : 6b5e8442 - correct
>          Events : 0.22
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>       Number   Major   Minor   RaidDevice State
> this     1       8       49        1      active sync   /dev/sdd1
>
>    0     0       8       33        0      active sync   /dev/sdc1
>    1     1       8       49        1      active sync   /dev/sdd1
>    2     2       8       65        2      active sync   /dev/sde1
>    3     3       0        0        3      faulty removed
>    4     4       8       81        4      spare   /dev/sdf1
>
> -------------------
>
> So what I don't get is:
>
> 1. Why is mdadm --examine listing "3     3       0        0        3   
> faulty removed" and telling me I have a failed device?
> 2. Why is one of the actual disks (sdf) used as a spare, even though I 
> didn't ask for it?
>
> Thanks for any tips or insights which may put me on the right track :)
>
This is perfectly normal (and explained in the mdadm manual page) - when
a RAID5 array is created, it is built in an initially degraded state:
the last device is marked as a spare, and the data is then rebuilt onto
it in the background.  That's why --examine shows a "faulty removed"
slot plus a spare - it lets the array be available for use immediately
while the rebuild runs.  You'll need to run 'mdadm -w /dev/md0' to
force the array into read-write mode (it's currently started in
auto-read-only mode), and the resync will then begin.
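Concretely (assuming the array really is /dev/md0, as in your
transcript), that's just:

```shell
# Switch the array from auto-read-only to read-write; this kicks
# off the background resync onto the spare (sdf1 in your case).
mdadm -w /dev/md0

# Watch the rebuild progress; once it completes, /proc/mdstat should
# show [4/4] [UUUU] and --examine will list four active devices.
watch -n 5 cat /proc/mdstat
```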
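As a quick sanity check on the sizes in your --examine output: RAID5
spends one device's worth of space on parity, so a 4-device array
should report 3 x the per-device size.  A one-liner's worth of
arithmetic (numbers taken straight from your output) confirms it:

```python
# RAID5 usable capacity = (n - 1) data devices' worth of space.
raid_devices = 4
used_dev_size_kib = 732571904        # "Used Dev Size" from mdadm --examine

array_size_kib = (raid_devices - 1) * used_dev_size_kib
print(array_size_kib)                # -> 2197715712, the reported Array Size
assert array_size_kib == 2197715712
```

So the geometry is exactly what you asked for - four raid devices,
three of data plus one of parity.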

HTH,
        Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@xxxxxxxxxxxxxxx> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |
