RE: RAID 10 with 2 failed drives

> On 9/21/19 2:38 PM, Liviu Petcu wrote:
>> 
>> # mdadm --assemble --force /dev/md1 /dev/loop0p2 /dev/loop1p2 /dev/loop2p2 /dev/loop3p2 /dev/loop4p2 /dev/loop5p2
>> mdadm: cannot open device /dev/loop0p2: No such file or directory
>> mdadm: /dev/loop0p2 has no superblock - assembly aborted
>> 
>> Is this wrong? Or did I break something?

>You need to run kpartx -av on each loopback device and then access the partitions in /dev/mapper.

I used kpartx like this, once for each image:

# kpartx -avr /mnt/usb/discX.img
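
That is, roughly the following (the disc*.img glob is only a stand-in for whatever the image files are actually called):

for img in /mnt/usb/disc*.img; do
    # add read-only partition mappings for this image copy
    kpartx -avr "$img"
done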

The results:
# ls -al /dev/mapper/

total 0
drwxr-xr-x  2 root root     260 Sep 22 12:16 .
drwxr-xr-x 19 root root   10440 Sep 22 12:16 ..
crw-------  1 root root 10, 236 Sep 22 07:05 control
lrwxrwxrwx  1 root root       7 Sep 22 12:15 loop0p1 -> ../dm-0
lrwxrwxrwx  1 root root       7 Sep 22 12:15 loop0p2 -> ../dm-1
lrwxrwxrwx  1 root root       7 Sep 22 12:15 loop1p1 -> ../dm-2
lrwxrwxrwx  1 root root       7 Sep 22 12:15 loop1p2 -> ../dm-3
lrwxrwxrwx  1 root root       7 Sep 22 12:16 loop2p1 -> ../dm-4
lrwxrwxrwx  1 root root       7 Sep 22 12:16 loop2p2 -> ../dm-5
lrwxrwxrwx  1 root root       7 Sep 22 12:16 loop3p1 -> ../dm-6
lrwxrwxrwx  1 root root       7 Sep 22 12:16 loop3p2 -> ../dm-7
lrwxrwxrwx  1 root root       7 Sep 22 12:16 loop4p1 -> ../dm-8
lrwxrwxrwx  1 root root       7 Sep 22 12:16 loop4p2 -> ../dm-9

# cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md126 : active raid1 dm-8[6](F) dm-6[0] dm-4[7](F) dm-2[8](F)
      4193216 blocks [6/1] [U_____]

md127 : inactive dm-9[3](S) dm-7[0](S) dm-5[5](S) dm-3[4](S) dm-1[2](S)
      9746595840 blocks

unused devices: <none>

But compared with the mdadm status at the time of the failure, it looks different:

# cat mdstat.txt

Personalities : [raid1] [raid10] 
md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[6](F) sdc2[7](F)
      5835374592 blocks 256K chunks 2 offset-copies [6/4] [U__UUU]
      
md0 : active raid1 sdb1[0] sda1[5] sdf1[4] sde1[3] sdd1[6](F) sdc1[7](F)
      4193216 blocks [6/4] [U__UUU]
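
For reference, I can also dump the superblocks from the mapped partitions to compare event counts and update times, something like:

for p in /dev/mapper/loop*p2; do
    echo "== $p =="
    # fields that matter when deciding on a forced assemble
    mdadm --examine "$p" | grep -E 'Events|Update Time|State'
done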

My question is: is it possible that these differences are due to the fact that I initially booted the system from a live CD to copy the disks?
I saw then that it tried to start the RAID arrays, so I shut down the system, removed the disks, and installed them in another computer to copy them...
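
In case the difference is just leftover auto-assembly from the live CD, I assume the next step would be to stop the auto-assembled arrays (md126/md127 above) and retry the forced assemble of md1, now pointing at the mapper devices from the listing:

# mdadm --stop /dev/md126
# mdadm --stop /dev/md127
# mdadm --assemble --force /dev/md1 /dev/mapper/loop0p2 /dev/mapper/loop1p2 /dev/mapper/loop2p2 /dev/mapper/loop3p2 /dev/mapper/loop4p2

But I would rather confirm here first before forcing anything on these copies.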

Thank you.





