Re: blktest failures

On 4/15/22 02:59, Yanjun Zhu wrote:
> On 2022/4/15 15:46, Bob Pearson wrote:
>> On 4/15/22 02:37, Yanjun Zhu wrote:
>>>
>>> On 2022/4/15 15:29, Bob Pearson wrote:
>>>> On 4/15/22 02:12, Yanjun Zhu wrote:
>>>>> On 2022/4/10 5:43, Bob Pearson wrote:
>>>>>> On 4/9/22 00:04, Christoph Hellwig wrote:
>>>>>>> On Fri, Apr 08, 2022 at 04:25:12PM -0700, Bart Van Assche wrote:
>>>>>>>> One of the functions in the above call stack is sd_remove(). sd_remove()
>>>>>>>> calls del_gendisk() just before calling sd_shutdown(). sd_shutdown()
>>>>>>>> submits the SYNCHRONIZE CACHE command. In del_gendisk() I found the
>>>>>>>> following comment: "Fail any new I/O". Do you agree that failing new I/O
>>>>>>>> before sd_shutdown() is called is wrong? Is there any other way to fix this
>>>>>>>> than moving the blk_queue_start_drain() etc. calls out of del_gendisk() and
>>>>>>>> into a new function?
>>>>>>> That SYNCHRONIZE CACHE is a passthrough command sent on the request_queue
>>>>>>> and should not be affected by stopping all file system I/O.
>>>>>> When I run check -q srp,
>>>>>> all the test cases pass, but each one stops for 3+ minutes at SYNCHRONIZE CACHE.
>>>>>> The rxe device is still active until sync cache returns, when the last QP and the PD
>>>>>> are destroyed. It may be that the queues are blocked waiting for something else
>>>>>> even though they have reported success?
>>>>> If you remove all the xarray patches and use the original source code, this will not occur.
>>>>>
>>>>> Zhu Yanjun
>>>>>
>>>> I missed one other point. The 3-minute delay is actually not an rxe bug at all but was recently
>>>> caused by a bad SCSI patch which has since been reverted.
>>>
>>> I am not sure about this because the wr NULL problem exists with the xarray patches.
>>>
>>> Please let us find the root cause of the wr NULL problem.
>>>
>>> This can make RXE more stable.
>>>
>>> Zhu Yanjun
>>>
>>
>> You mean mr = NULL. And it is not happening in my tree. I have WARN_ONs looking for it
> 
> Why do you say that you cannot reproduce this problem now?
> 
> Please check your mail. I remember that you could reproduce this wr NULL on your host.

Yes, I did see it before, but not with the new pool patches. The whole point of the patch series I
have been working on for the past month has been to clean up races in the shutdown and cleanup code.
These races were demonstrated by colleagues here at HPE who are running Lustre over soft RoCE.
I looked at the shutdown code and saw that the whole design was flawed. We were using
krefs to track reference counts on MRs, QPs, etc., but always returning to rdma-core
regardless of the reference count. rdma-core, having no visibility into our references,
then deletes the object fairly soon after that, while we are still trying to process
late-arriving packets. One of the features in my tree is to use wait_for_completion() in the
return path to rdma-core and pause until all the references have been dropped.
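
For reference, here is a minimal sketch of that pattern. The names below (struct rxe_obj,
rxe_obj_init, rxe_obj_destroy, rxe_obj_release) are made up for illustration only; the actual
rxe pool code is structured differently:

/*
 * Sketch of the "wait for all references before returning to rdma-core"
 * idea.  Hypothetical names, not the real rxe pool code.
 */
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/completion.h>
#include <linux/slab.h>

struct rxe_obj {
	struct kref		ref_cnt;	/* held by packet/completer paths */
	struct completion	complete;	/* fires when the last ref drops */
};

static void rxe_obj_init(struct rxe_obj *obj)
{
	kref_init(&obj->ref_cnt);		/* creation reference */
	init_completion(&obj->complete);
}

static void rxe_obj_release(struct kref *kref)
{
	struct rxe_obj *obj = container_of(kref, struct rxe_obj, ref_cnt);

	/* last reference is gone; unblock the destroy path */
	complete(&obj->complete);
}

/* verbs destroy path, called before returning to rdma-core */
static void rxe_obj_destroy(struct rxe_obj *obj)
{
	/* drop the creation reference ... */
	kref_put(&obj->ref_cnt, rxe_obj_release);

	/*
	 * ... then wait until any late-arriving packets have dropped
	 * theirs, so rdma-core cannot free the object while it is
	 * still in use.
	 */
	wait_for_completion(&obj->complete);
	kfree(obj);
}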

I really don't see the point of debugging the old code because it is just wrong wrong wrong.
It will never be stable under heavy load.

Bob
> 
> Please focus on this problem. This can make RXE more stable.
> Let us find the root cause and fix this problem ASAP.
> 
> Thanks.
> Zhu Yanjun
> 
>> and it isn't happening.
> 



