Re: [PATCH v5 08/13] SIW queue pair methods

-----"Jason Gunthorpe" <jgg@xxxxxxxx> wrote: -----

>To: "Bernard Metzler" <bmt@xxxxxxxxxxxxxx>
>From: "Jason Gunthorpe" <jgg@xxxxxxxx>
>Date: 03/06/2019 08:51PM
>Cc: linux-rdma@xxxxxxxxxxxxxxx
>Subject: Re: [PATCH v5 08/13] SIW queue pair methods
>
>On Tue, Feb 19, 2019 at 11:08:58AM +0100, Bernard Metzler wrote:
>> +/*
>> + * Must be called with SQ locked.
>> + * To avoid complete SQ starvation by constant inbound READ requests,
>> + * the active IRQ will not be served after qp->irq_burst, if the
>> + * SQ has pending work.
>> + */
>> +int siw_activate_tx(struct siw_qp *qp)
>> +{
>> +	struct siw_sqe	*irqe, *sqe;
>> +	struct siw_wqe	*wqe = tx_wqe(qp);
>> +	int rv = 1;
>> +
>> +	irqe = &qp->irq[qp->irq_get % qp->attrs.irq_size];
>> +
>> +	if (irqe->flags & SIW_WQE_VALID) {
>> +		sqe = sq_get_next(qp);
>> +
>> +		/*
>> +		 * Avoid local WQE processing starvation in case
>> +		 * of constant inbound READ request stream
>> +		 */
>> +		if (sqe && ++qp->irq_burst >= SIW_IRQ_MAXBURST_SQ_ACTIVE) {
>> +			qp->irq_burst = 0;
>> +			goto skip_irq;
>> +		}
>> +		memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE);
>> +		wqe->wr_status = SIW_WR_QUEUED;
>> +
>> +		/* start READ RESPONSE */
>> +		wqe->sqe.opcode = SIW_OP_READ_RESPONSE;
>> +		wqe->sqe.flags = 0;
>> +		if (irqe->num_sge) {
>> +			wqe->sqe.num_sge = 1;
>> +			wqe->sqe.sge[0].length = irqe->sge[0].length;
>> +			wqe->sqe.sge[0].laddr = irqe->sge[0].laddr;
>> +			wqe->sqe.sge[0].lkey = irqe->sge[0].lkey;
>> +		} else {
>> +			wqe->sqe.num_sge = 0;
>> +		}
>> +
>> +		/* Retain original RREQ's message sequence number for
>> +		 * potential error reporting cases.
>> +		 */
>> +		wqe->sqe.sge[1].length = irqe->sge[1].length;
>> +
>> +		wqe->sqe.rkey = irqe->rkey;
>> +		wqe->sqe.raddr = irqe->raddr;
>> +
>> +		wqe->processed = 0;
>> +		qp->irq_get++;
>> +
>> +		/* mark current IRQ entry free */
>> +		smp_store_mb(irqe->flags, 0);
>
>I really dislike seeing attempts at lock-free constructions like this
>in drivers... 
>
In fact I do not need memory barriers here, since this memory is
not user mapped. It is the inbound read queue (IRQ), and both the
receive path and the send path access it under the send queue
spinlock only.

The receive side adds to the queue when a new inbound READ request
comes in; send queue processing serves the IRQ as well as the SQ.


>.. and I'm having doubts it is right, as directly above we read
>irqe->flags without a barrier/atomic/etc in the if.
>
right

>What is this attempting to do, and why can't it have an standard
>scheme?
>
>'lock-free' with test_bit/set_bit/clear_bit atomics would be better
>that this..
>
>Same remark for all these magic cases.
>
>Jason
>
>



