Re: mdadm question

Drive is the exact same model as the old one. Output of the requested commands:

# mdadm --manage /dev/md127 --remove /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md127
# mdadm --zero /dev/sdb
# mdadm --manage /dev/md127 --add /dev/sdb
mdadm: added /dev/sdb
# ps aux | grep mdmon
root      1937  0.0  0.1  10492 10484 ?        SLsl 14:04   0:00 mdmon md127
root      2055  0.0  0.0   2420   928 pts/0    S+   14:06   0:00 grep mdmon

Kernel log messages from the remove/add:

md: unbind<sdb>
md: export_rdev(sdb)
md: bind<sdb>
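
A quick way to confirm whether mdmon has actually kicked off a rebuild into
the volume after the re-add (same device names as above; md126 is the raid1
member array, per the mdstat output quoted below):

# cat /proc/mdstat
# mdadm --detail /dev/md126
# cat /sys/block/md126/md/sync_action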



On Sun, September 14, 2014 5:31 pm, NeilBrown wrote:
> On 12 Sep 2014 18:49:54 -0700 Luke Odom <luke@xxxxxxxxxxxx> wrote:
>
>>   I had a raid1 subarray running within an imsm container. One of the
>> drives died so I replaced it. I can get the new drive into the imsm
>> container but I can't add it to the raid1 array within that
>> container. I've read the man page and can't seem to figure it out.
>> Any help would be greatly appreciated. Using mdadm 3.2.5 on Debian
>> squeeze.
>
> This should just happen automatically.  As soon as you add the device to
> the container, mdmon notices and adds it to the raid1.
>
> However it appears not to have happened...
>
> I assume the new drive is exactly the same size as the old drive?
> Try removing the new device from md127, run "mdadm --zero" on it, then
> add it back again.
> Do any messages appear in the kernel logs when you do that?
>
> Is "mdmon md127" running?
>
> NeilBrown
>
>>
>> root@ds6790:~# cat /proc/mdstat
>> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
>> md126 : active raid1 sda[0]
>>       976759808 blocks super external:/md127/0 [2/1] [U_]
>>
>> md127 : inactive sdb[0](S) sda[1](S)
>>       4901 blocks super external:imsm
>>
>> unused devices: <none>
>>
>> root@ds6790:~# mdadm --detail /dev/md126
>> /dev/md126:
>>       Container : /dev/md127, member 0
>>      Raid Level : raid1
>>      Array Size : 976759808 (931.51 GiB 1000.20 GB)
>>   Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
>>    Raid Devices : 2
>>   Total Devices : 1
>>
>>           State : active, degraded 
>>  Active Devices : 1
>> Working Devices : 1
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>            UUID : 1be60edf:5c16b945:86434b6b:2714fddb
>>     Number   Major   Minor   RaidDevice State
>>        0       8        0        0      active sync   /dev/sda
>>        1       0        0        1      removed
>>
>> root@ds6790:~# mdadm --examine /dev/md127
>> /dev/md127:
>>           Magic : Intel Raid ISM Cfg Sig.
>>         Version : 1.1.00
>>     Orig Family : 6e37aa48
>>          Family : 6e37aa48
>>      Generation : 00640a43
>>      Attributes : All supported
>>            UUID : ac27ba68:f8a3618d:3810d44f:25031c07
>>        Checksum : 513ef1f6 correct
>>     MPB Sectors : 1
>>           Disks : 2
>>    RAID Devices : 1
>>
>>   Disk00 Serial : 9XG3RTL0
>>           State : active
>>              Id : 00000002
>>     Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
>>
>> [Volume0]:
>>            UUID : 1be60edf:5c16b945:86434b6b:2714fddb
>>      RAID Level : 1
>>         Members : 2
>>           Slots : [U_]
>>     Failed disk : 1
>>       This Slot : 0
>>      Array Size : 1953519616 (931.51 GiB 1000.20 GB)
>>    Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
>>   Sector Offset : 0
>>     Num Stripes : 7630936
>>      Chunk Size : 64 KiB
>>        Reserved : 0
>>   Migrate State : idle
>>       Map State : degraded
>>     Dirty State : dirty
>>
>>   Disk01 Serial : XG3RWMF
>>           State : failed
>>              Id : ffffffff
>>     Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
>>
>
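
P.S. For the archives: the remove/zero/add sequence above, written out with
the long option spelled in full (--zero is an unambiguous abbreviation of
--zero-superblock, so both spellings do the same thing), assuming the same
container and disk names:

# mdadm --manage /dev/md127 --remove /dev/sdb
# mdadm --zero-superblock /dev/sdb
# mdadm --manage /dev/md127 --add /dev/sdb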
