Re: bug report for rdma_rxe

On 4/25/22 17:58, Jason Gunthorpe wrote:
> On Mon, Apr 25, 2022 at 11:58:55AM -0500, Bob Pearson wrote:
>> On 4/24/22 19:04, Yanjun Zhu wrote:
>>> 在 2022/4/23 5:04, Bob Pearson 写道:
>>>> Local operations in the rdma_rxe driver are not idempotent, but the
>>>> RC retry mechanism backs the send queue up to the wqe currently being
>>>> acknowledged and re-walks the sq. Each send or write operation is
>>>> retried, except that the first one is truncated by the packets that have
>>>> already been acknowledged. Each read and atomic operation is resent,
>>>> except that read data already received for the first wqe is not requested
>>>> again. But all the local operations are replayed. The problem is local
>>>> invalidate, which is destructive. For example
>>>
>>> Is there any example or just your analysis?
>>
>> I have a colleague at HPE who is testing Lustre/o2iblnd/rxe. They are testing over a
>> highly reliable network, so they do not expect to see dropped or out-of-order packets,
>> but they still see multiple timeout flows. When working on rping a week ago I also saw
>> lots of timeouts and verified that the timeout code in rxe behaves as follows: when a
>> new RC operation is sent, the retry timer is set to fire at jiffies + qp->timeout_jiffies,
>> but only if there is not already a pending timer. Once set it is never cleared, so it
>> typically fires a few msec later, initiating a retry flow. If IO operations are frequent,
>> there will be a timeout every few msec (about 20 times a second for typical timeout
>> values).
>>
>> o2iblnd uses fast-reg MRs to write data to the target system, then a local invalidate
>> operation to invalidate the MR; it then increments the key portion of the rkey, resets
>> the map, and does a reg MR operation. Retry flows cause the local invalidate and reg MR
>> operations to be re-executed over and over. A single retry can cause half a dozen
>> invalidate operations to run with various rkeys, and they mostly fail because they don't
>> match the current MR. This results in Lustre crashing.
>>
>> Currently I am actually happy that the unneeded retries are happening, because they make
>> testing the retry code a lot easier. But eventually it would be good to clear or reset
>> the timer after the operation completes, which would greatly reduce the number of retries.
>> It will also be important to figure out how the IBA intended local invalidates and reg MRs
>> to work. The way they are now, they cannot be successfully retried. Also, marking them as
>> done and skipping them in the retry sequence does not work. (It breaks some of the
>> blktests test cases.)
> 
> local operations on a QP are not supposed to need retry because they
> are not supposed to go on the network, so backing up the sq past its
> current position should not re-execute any local operations until the
> sq passes its actual head.
> 
> Or, stated differently, you have a head/tail pointer for local work
> and a head/tail pointer for network work and the two track
> independently within the defined ordering constraints.
> 
> Jason

This is a strong constraint on the send queue, but I suspect it is the only sane solution.
It implies that, since local operations are not redone, the verbs consumer must guarantee
that it is safe to change the MR/MW state as soon as the operation executes for the first
time. This means that either there is a fence or the consumer has seen the completions of
all IO operations that depend on the memory. It is not clear whether all test cases obey
these rules. We should WARN in those situations where we can see a violation.
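To make sure I understand the suggestion, here is a minimal userspace sketch of the
two-pointer idea: local wqes track their own completion point, so a retry walk re-issues
network wqes but never re-executes a local op that already ran. All names here
(sketch_wqe, local_head, retry_walk) are illustrative, not the actual rxe structures.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model: local operations track their own
 * completion point, so a retry walk over the sq re-issues network wqes
 * but never re-executes a local op that already ran. */

enum wqe_kind { WQE_NET, WQE_LOCAL };

struct sketch_wqe {
	enum wqe_kind kind;
	int exec_count;		/* how many times this wqe was (re)executed */
};

/* Re-walk the queue from the retry point: network ops are resent,
 * local ops run only if they have never run (index >= local_head). */
static void retry_walk(struct sketch_wqe *sq, size_t retry_from,
		       size_t sq_head, size_t local_head)
{
	for (size_t i = retry_from; i < sq_head; i++) {
		if (sq[i].kind == WQE_LOCAL && i < local_head)
			continue;	/* already executed; skip */
		sq[i].exec_count++;
	}
}
```

In this model a retry backs the network pointer up but leaves local_head alone, which is
the "two head/tail pointers tracking independently" behavior described above.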

There is another source of errors in the driver that we now suspect. The send queue is
shared by three or more threads: verbs API calls that post operations, the requester
tasklet that turns wqe's into RoCE request packets, and the completer tasklet that
responds to RoCE ack/read-reply packets (for RC) or marks the wqe done (for UD/UC).
Both tasklets read and write wqe fields but do not use any locking to enforce consistency.
For normal flows this is mostly OK, because each wqe is accessed first only by the
requester tasklet and later only by the completer tasklet. But in retry flows the two can
overlap. There needs to be clear ownership of each wqe by one tasklet or the other, with
memory barriers at the hand-offs.
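The hand-off discipline I have in mind looks something like the following userspace
sketch: one tasklet owns a wqe at a time, and ownership transfer pairs a release-store
with an acquire-load so the new owner is guaranteed to see all prior writes to the wqe.
Field and function names are illustrative only; the kernel analogue would be
smp_store_release()/smp_load_acquire().

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

enum owner { OWNER_REQUESTER, OWNER_COMPLETER };

struct sketch_wqe2 {
	int psn_last;			/* plain field, written only by the owner */
	_Atomic enum owner owner;	/* published with release/acquire */
};

/* Requester finishes with the wqe and hands it to the completer. The
 * release-store orders the plain write to psn_last before the publish. */
static void requester_handoff(struct sketch_wqe2 *wqe, int psn)
{
	wqe->psn_last = psn;
	atomic_store_explicit(&wqe->owner, OWNER_COMPLETER,
			      memory_order_release);
}

/* Completer may touch the wqe only once the acquire-load sees ownership;
 * the acquire makes the requester's earlier writes visible. */
static bool completer_owns(struct sketch_wqe2 *wqe)
{
	return atomic_load_explicit(&wqe->owner,
				    memory_order_acquire) == OWNER_COMPLETER;
}
```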

The wqe->state variable should indicate which tasklet owns the wqe, and a lock should be
held whenever the state is loaded or changed. The retry prep routine req_retry() should
hold the lock while re-marking the wqes.
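Roughly this, as a userspace sketch: the retry-prep path re-marks every in-flight wqe
back to the requester under a single hold of the sq lock, so the completer can never
observe a half-rewound queue. The state names and sketch_req_retry() are illustrative,
not the actual rxe code.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

enum wqe_state { WQE_ST_POSTED, WQE_ST_PENDING_ACK, WQE_ST_DONE };

struct sketch_sq {
	pthread_mutex_t lock;
	enum wqe_state state[8];
	size_t head;
};

/* Analogue of req_retry(): hold the lock across the whole re-marking
 * pass, so no other context sees a partially rewound send queue. */
static void sketch_req_retry(struct sketch_sq *sq, size_t retry_from)
{
	pthread_mutex_lock(&sq->lock);
	for (size_t i = retry_from; i < sq->head; i++) {
		if (sq->state[i] == WQE_ST_PENDING_ACK)
			sq->state[i] = WQE_ST_POSTED;	/* back to requester */
	}
	pthread_mutex_unlock(&sq->lock);
}
```

Completed wqes are left alone; only wqes still waiting on acks are handed back to the
requester, which matches the "backing up the sq to the wqe being acknowledged" behavior
discussed above.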

Bob


