Re: [PATCH v4 17/25] ibnbd: client: main functionality

On 9/18/19 12:14 AM, Danil Kipnis wrote:
> I'm not familiar with dm code, but don't they need to deal with the
> same situation: if I configure 100 logical volumes on top of a single
> NVMe drive with X hardware queues, each queue_depth deep, then each dm
> block device would need to advertise X hardware queues in order to
> achieve the highest performance in case only this one volume is accessed,
> while in fact those X physical queues have to be shared among all 100
> logical volumes if they are accessed in parallel?

Combining multiple queues (a) into a single queue (b) that is smaller than the combined source queues without sacrificing performance is tricky. We already have one such implementation in the block layer core, and it took considerable time to get it right. See e.g. blk_mq_sched_mark_restart_hctx() and blk_mq_sched_restart().
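
To make that concrete, here is a rough sketch (not taken from any existing driver; shared_budget_dev and the 'budget' counter are made-up names) of how a blk-mq driver reports that the shared queue (b) is full, so that the restart machinery mentioned above can rerun the hardware queue later:

#include <linux/blk-mq.h>
#include <linux/atomic.h>

/* made-up per-device state; 'budget' stands for whatever shared
 * resource backs queue (b) */
struct shared_budget_dev {
	atomic_t budget;
};

static blk_status_t shared_budget_queue_rq(struct blk_mq_hw_ctx *hctx,
					   const struct blk_mq_queue_data *bd)
{
	struct shared_budget_dev *dev = hctx->queue->queuedata;
	struct request *rq = bd->rq;

	if (atomic_dec_if_positive(&dev->budget) < 0)
		/*
		 * The shared resource is exhausted. Returning
		 * BLK_STS_RESOURCE tells blk-mq to hold back the request
		 * and rerun this hardware queue later through the restart
		 * machinery mentioned above.
		 */
		return BLK_STS_RESOURCE;

	blk_mq_start_request(rq);
	/* ... hand the request to the transport here ... */
	return BLK_STS_OK;
}

The completion path has to give the budget back; blk-mq then reruns any hardware queue that was marked for restart once a request from the same tag set completes.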

dm drivers are expected to return DM_MAPIO_REQUEUE or DM_MAPIO_DELAY_REQUEUE if queue (b) is full. It turned out to be difficult to get this right in the dm-mpath driver while also achieving good performance.
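
For illustration, a simplified request-based target could look roughly like this (a sketch only, not the actual dm-mpath code; example_target and lower_bdev are made-up names):

#include <linux/device-mapper.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/err.h>

/* made-up single-path target state */
struct example_target {
	struct block_device *lower_bdev;
};

static int example_clone_and_map(struct dm_target *ti, struct request *rq,
				 union map_info *map_context,
				 struct request **__clone)
{
	struct example_target *t = ti->private;
	struct request_queue *q = bdev_get_queue(t->lower_bdev);
	struct request *clone;

	clone = blk_get_request(q, rq->cmd_flags | REQ_NOWAIT,
				BLK_MQ_REQ_NOWAIT);
	if (IS_ERR(clone))
		/*
		 * No free tag in queue (b): ask the dm core to requeue
		 * instead of blocking, with a delay if an immediate retry
		 * cannot make progress.
		 */
		return blk_queue_dying(q) ? DM_MAPIO_DELAY_REQUEUE :
					    DM_MAPIO_REQUEUE;

	*__clone = clone;
	return DM_MAPIO_REMAPPED;
}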

The ibnbd driver introduces a third implementation of this pattern: it combines the request queues of multiple block devices into one queue per CPU. It is considered important in the Linux kernel to avoid code duplication. Hence my question whether ibnbd can reuse the block layer infrastructure for sharing tag sets.
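
As a sketch of what reusing that infrastructure could look like, a driver would allocate one tag set per session/connection and create every logical block device on top of it, similar to how the NVMe host driver shares a single tag set among all namespaces of a controller (example_session, example_device and example_mq_ops are placeholder names):

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/cpumask.h>
#include <linux/numa.h>
#include <linux/err.h>

static const struct blk_mq_ops example_mq_ops;	/* .queue_rq etc. omitted */

/* made-up per-connection and per-device state */
struct example_session {
	struct blk_mq_tag_set	tag_set;
	int			queue_depth;
};

struct example_device {
	struct request_queue	*queue;
};

static int example_setup_session_tags(struct example_session *sess)
{
	struct blk_mq_tag_set *set = &sess->tag_set;

	set->ops		= &example_mq_ops;
	set->nr_hw_queues	= num_online_cpus();
	set->queue_depth	= sess->queue_depth;	/* per-connection limit */
	set->numa_node		= NUMA_NO_NODE;
	set->flags		= BLK_MQ_F_SHOULD_MERGE;
	set->driver_data	= sess;

	return blk_mq_alloc_tag_set(set);
}

static int example_add_device(struct example_session *sess,
			      struct example_device *dev)
{
	/*
	 * Every logical device reuses the session's tag set, so the block
	 * layer arbitrates the shared queue depth and takes care of queue
	 * restarts instead of the driver.
	 */
	dev->queue = blk_mq_init_queue(&sess->tag_set);
	if (IS_ERR(dev->queue))
		return PTR_ERR(dev->queue);

	dev->queue->queuedata = dev;
	return 0;
}

The queue_rq sketched earlier would then operate on one tag space per session rather than one per device.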

Thanks,

Bart.
