RAID5 now recognized as RAID1

Hi,

For a bit of context: I had a RAID5 array with 4 disks running on a QNAP NAS.
One disk started failing, so I ordered a replacement disk, but in the meantime the NAS became unresponsive and I had to reboot it.
Now the NAS does not (really) come back up, and I can only log in to it over SSH.

When I run cat /proc/mdstat, this is what I get:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md322 : active raid1 sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[24] sda4[1] sdb4[0] sdd4[25]
      458880 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdb1[0] sdd1[25] sdc1[24] sda1[26]
      530048 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

So I don’t know how this could have happened. I looked through the FAQ, but I can’t see anything that would explain it, nor how I can recover from it.
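
In case it is useful, this is what I was planning to run next to inspect the on-disk superblocks of the data array members before touching anything. I am assuming the RAID5 data partitions are /dev/sd[abcd]3, since the system arrays above use partitions 1, 2, 4 and 5, but I have not confirmed that on this box:

# show the partition layout of each disk, to confirm which partition held the data array
cat /proc/partitions

# dump the md superblock of each suspected data-array member (read-only, changes nothing)
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

I have not run any --assemble or --create commands so far, and I would rather not until I understand what state the superblocks are in.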

Any help appreciated.

Thanks


