mdadm I/O error with DDF RAID

Hi All,

We have developed a RAID creation application that creates arrays with
DDF RAID metadata, using PCIe SSDs as the physical disks. The
application writes the anchor, primary and secondary headers, the
virtual disk and physical disk records, the configuration records and
the physical disk data, and the header offsets recorded in the anchor,
primary and secondary headers are correct. The problem is that when we
boot into Ubuntu Server, mdadm reports a disk failure and the block
layer logs an error with rw=0, want=7, limit=1000215216. Using a logic
analyzer we also confirmed that no I/O errors are coming from the PCIe
SSD. The limit value 1000215216 is the capacity of the SSD in 512-byte
blocks. Any insight will be highly appreciated.
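
In case it helps anyone reproduce this, below is a minimal sketch (not
our actual tool) of a sanity check on the anchor header. It assumes
512-byte logical blocks and the SNIA DDF layout, where the anchor sits
in the last block of the disk and starts with the big-endian signature
0xDE11DE11; the device path in the usage line is just a placeholder.

/* ddf-anchor-check.c: read the last 512-byte sector of a block device
 * and look for the DDF anchor header signature.
 * Build: gcc -o ddf-anchor-check ddf-anchor-check.c
 * Usage: ./ddf-anchor-check /dev/nvme0n1
 */
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKGETSIZE64 */

#define DDF_HEADER_SIGNATURE 0xDE11DE11u

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &bytes) < 0) { perror("BLKGETSIZE64"); return 1; }

    /* Sector count should match the "limit" value in the kernel message. */
    printf("device: %llu bytes, %llu 512-byte sectors\n",
           (unsigned long long)bytes, (unsigned long long)(bytes / 512));

    /* The DDF anchor header lives in the last logical block. */
    unsigned char buf[512];
    if (pread(fd, buf, sizeof(buf), (off_t)(bytes - 512)) != (ssize_t)sizeof(buf)) {
        perror("pread");
        return 1;
    }

    /* Signature is the first 4 bytes of the header, big-endian per the spec. */
    uint32_t sig_be = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                      ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    uint32_t sig_le = ((uint32_t)buf[3] << 24) | ((uint32_t)buf[2] << 16) |
                      ((uint32_t)buf[1] << 8)  |  (uint32_t)buf[0];

    if (sig_be == DDF_HEADER_SIGNATURE)
        printf("DDF anchor signature found in the last sector (big-endian)\n");
    else if (sig_le == DDF_HEADER_SIGNATURE)
        printf("DDF signature found in the last sector, but stored little-endian\n");
    else
        printf("no DDF signature in the last sector (first 4 bytes: 0x%08x)\n", sig_be);

    close(fd);
    return 0;
}

If the signature reads back correctly there, "mdadm --examine" on the
member disk should at least recognise the DDF container, which would
suggest the problem is in one of the later records rather than in the
anchor placement.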

Regards,
Arka


