Re: [PATCH RFC] nvme-rdma: support devices with queue size < 32

>>> Maybe it'll be better if we do:
>>>
>>> static inline bool queue_sig_limit(struct nvme_rdma_queue *queue)
>>> {
>>> 	return (++queue->sig_count % (queue->queue_size / 2)) == 0;
>>> }
>>>
>>> And lose the hard-coded 32 entirely. Care to test that?
>>
>> Hello Sagi,
>> I agree with you, we've found a setup where signalling once every
>> queue depth is not enough, and we're testing the division by two,
>> which seems to work fine so far.
>>
>> In your version, for queue lengths > 32 the notifications would be
>> sent less often than they are now. I'm wondering if that will have an
>> impact on performance and on the card's internal buffering (it seems
>> that the Mellanox buffers hold ~100 elements). Wouldn't it create
>> issues?
>>
>> I'd like to see the magic constant removed. From what I can see, we
>> need something that doesn't exceed the card's send buffer but is also
>> not lower than the queue depth. What do you think?
> 
> I'm not sure what buffering is needed from the device at all in this
> case; the device is simply expected to avoid signaling completions.
> 
> Mellanox folks, any idea where this limitation is coming from?
> Do we need a device capability for it?

In the case of mlx5 we're getting -ENOMEM from begin_wqe (the
mlx5_wq_overflow condition). This queue is sized in the driver based on
multiple factors. If we ack less often, I think this could happen for
higher queue depths too.
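
Just to make the idea concrete, here is an untested sketch of what I
have in mind: keep the division by two, but clamp the interval so that
very small queues still signal, while deep queues don't stretch the
interval past some cap. The 32-entry cap (and its name) below is only a
placeholder, not a value derived from the device:

/* Untested sketch: the cap is a placeholder, not a device-derived value. */
#define QUEUE_SIG_MAX_INTERVAL	32

static inline bool queue_sig_limit(struct nvme_rdma_queue *queue)
{
	int limit = queue->queue_size / 2;

	/* For queue_size < 2 the division gives 0, so signal every request. */
	if (limit < 1)
		limit = 1;

	/* Don't signal less often than the placeholder cap allows. */
	if (limit > QUEUE_SIG_MAX_INTERVAL)
		limit = QUEUE_SIG_MAX_INTERVAL;

	return (++queue->sig_count % limit) == 0;
}

If a cap is really needed, it should probably come from a device
capability (as you suggest) rather than from a hard-coded value.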

Marta