Re: mdadm I/O error with DDF RAID

On Fri, Nov 11 2016, Arka Sharma wrote:

> Hi All,
>
> We have developed a RAID creation application which creates RAID
> arrays with DDF metadata, using PCIe SSDs as the physical disks. We
> write the anchor, primary, and secondary headers, the virtual and
> physical disk records, the configuration record, and the physical
> disk data, and the offsets of the headers are updated correctly in
> the primary, secondary, and anchor headers. The problem is that when
> we boot into Ubuntu Server, mdadm throws a disk failure error
> message, and from the block layer we get rw=0, want=7,
> limit=1000215216. Using a logic analyzer, we also confirmed that no
> I/O error is coming from the PCIe SSDs. Also, the limit value
> 1000215216 is the capacity of the SSD in 512-byte blocks. Any
> insight will be highly appreciated.
>

It looks like mdadm is attempting a 4K read starting at the last sector.

Possibly the SSDs report a physical sector size of 4K.
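One quick way to check that guess is to read the standard block-queue
attributes from sysfs; a minimal sketch (device names will of course
differ on your system):

```python
# Sketch: print logical vs physical block size for each block device,
# using the standard sysfs queue attributes.
from pathlib import Path

for queue in sorted(Path("/sys/block").glob("*/queue")):
    logical = queue / "logical_block_size"
    physical = queue / "physical_block_size"
    if not (logical.exists() and physical.exists()):
        continue
    print(f"{queue.parent.name}: "
          f"logical={logical.read_text().strip()} "
          f"physical={physical.read_text().strip()}")
```

A drive showing logical=512 with physical=4096 is a 512e device:
512-byte I/O is still accepted, but the firmware works in 4K physical
sectors internally.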

I don't know how DDF is supposed to work on a device like that.
Should the anchor be at the start of the last 4K block,
or in the last 512-byte virtual block?
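To illustrate why that matters, here is a rough sketch of the
arithmetic, using the limit value from the error message; the 4 KiB
read size is my assumption above, not something confirmed from the
report:

```python
SECTOR = 512          # logical sector size that mdadm's DDF code assumes
limit = 1000215216    # device capacity in 512-byte sectors (from the error)

# DDF places the anchor header in the last logical block of the disk.
anchor = limit - 1

# Hypothetical 4 KiB read starting at that sector (e.g. if something
# rounds the read up to the 4K physical sector size).
read_bytes = 4096
last = anchor + read_bytes // SECTOR - 1   # last sector the read touches
overrun = last - (limit - 1)               # sectors past the end of the device

print(anchor, last, overrun)   # -> 1000215215 1000215222 7
```

So a 4 KiB read anchored at the last 512-byte sector would spill 7
sectors past the end of the device, whereas a read anchored at the
start of the last 4K block would fit exactly.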

DDF support in mdadm was written with the assumption of 512-byte blocks.

I'm not at all certain this is the cause of the problem, though.

I would suggest starting by finding out which READ request in mdadm is
causing the error.

NeilBrown

Attachment: signature.asc
Description: PGP signature

