RE: broken CRCs at NVMeF target with SIW & NVMe/TCP transports

-----"Sagi Grimberg" <sagi@xxxxxxxxxxx> wrote: -----

>To: "Christoph Hellwig" <hch@xxxxxx>, "Krishnamraju Eraparaju"
><krishna2@xxxxxxxxxxx>
>From: "Sagi Grimberg" <sagi@xxxxxxxxxxx>
>Date: 03/17/2020 05:04PM
>Cc: "Bernard Metzler" <BMT@xxxxxxxxxxxxxx>,
>linux-nvme@xxxxxxxxxxxxxxxxxxx, linux-rdma@xxxxxxxxxxxxxxx,
>"Nirranjan Kirubaharan" <nirranjan@xxxxxxxxxxx>, "Potnuri Bharat
>Teja" <bharat@xxxxxxxxxxx>
>Subject: [EXTERNAL] Re: broken CRCs at NVMeF target with SIW &
>NVMe/TCP transports
>
>> On Mon, Mar 16, 2020 at 09:50:10PM +0530, Krishnamraju Eraparaju
>> wrote:
>>>
>>> I'm seeing broken CRCs at the NVMeF target while running the below
>>> program at the host. Here the RDMA transport is SoftiWARP, but I'm
>>> seeing the same issue with NVMe/TCP as well.
>>>
>>> It appears to me that the same buffer is being rewritten by the
>>> application/ULP before getting the completion for the previous
>>> requests. HW-based transports (like iw_cxgb4) do not show this
>>> issue because they copy/DMA first and then compute the CRC on the
>>> copied buffer.
>> 
>> For TCP we can set BDI_CAP_STABLE_WRITES.  For RDMA I don't think
>> that is a good idea as pretty much all RDMA block drivers rely on
>> the DMA behavior above.  The answer is to bounce buffer the data in
>> SoftiWARP / SoftRoCE.
>
>We already do, see nvme_alloc_ns.
>
>
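For context, I believe the spot Sagi refers to in nvme_alloc_ns() is
roughly the following (paraphrased from memory, not a verbatim quote
of the tree):

  /*
   * drivers/nvme/host/core.c, nvme_alloc_ns() -- approximate excerpt:
   * stable writes are requested whenever the data digest option is
   * enabled on the controller.
   */
  if (ctrl->opts && ctrl->opts->data_digest)
      ns->queue->backing_dev_info->capabilities |=
          BDI_CAP_STABLE_WRITES;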

Krishna was hitting the issue when testing NVMeF over TCP with -G
(data digest) given during connect; an example connect line is below.
That should also enable STABLE_WRITES, I think, via the nvme_alloc_ns
path quoted above. So to me it seems we do not actually get stable
pages, but pages which are still touched after handover to the
provider.
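
For reference, a connect invocation of that shape would look like the
one below; the target address, port and NQN are placeholders, the
relevant part is only the -G (--data-digest) flag:

  # NVMe/TCP connect with data digest enabled (placeholder target)
  nvme connect -t tcp -a 192.168.0.2 -s 4420 \
      -n nqn.2020-03.com.example:testsubsys -G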





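To make the suspected race concrete, below is a minimal user-space
sketch (hypothetical, nothing NVMe- or siw-specific, all names made
up): one thread keeps rewriting a buffer, the way a ULP may touch a
page it has already handed to a software transport, while the main
thread computes a digest over it. The digest taken at "submit" time
then no longer matches the data seen at "completion":

/* build: cc -O2 -pthread crc_race.c -o crc_race
 * The data race on buf is intentional -- it models the ULP touching
 * a page the transport is still reading.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUFSZ 4096

static unsigned char buf[BUFSZ];

/* Toy checksum; stands in for the transport's CRC32. */
static uint32_t checksum(const unsigned char *p, size_t len)
{
	uint32_t sum = 0;

	for (size_t i = 0; i < len; i++)
		sum = sum * 31 + p[i];
	return sum;
}

/* The "application/ULP": rewrites the buffer without waiting for
 * the (simulated) transport completion. */
static void *rewriter(void *arg)
{
	(void)arg;
	for (int pass = 0; pass < 100000; pass++)
		memset(buf, pass & 0xff, sizeof(buf));
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, rewriter, NULL);

	/* The "software transport": digests the live, unstable page. */
	uint32_t at_submit = checksum(buf, sizeof(buf));

	/* By the time the I/O would complete, the page has changed. */
	pthread_join(t, NULL);
	uint32_t at_completion = checksum(buf, sizeof(buf));

	printf("digest at submit:     0x%08x\n", at_submit);
	printf("digest at completion: 0x%08x\n", at_completion);
	printf("%s\n", at_submit == at_completion ?
	       "buffer stayed stable (lucky run)" :
	       "MISMATCH -> broken CRC on the wire");
	return 0;
}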