Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done

On Fri, Dec 04, 2020 at 08:13:56PM +0100, Sebastian Andrzej Siewior wrote:
> With llist_head it is possible to avoid the locking (the irq-off region)
> when items are added. This makes it possible to add items on a remote
> CPU.
> llist_add() returns true if the list was previously empty. This can be
> used to invoke the SMP function call / raise softirq only if the first
> item was added (otherwise it is already pending).
> This simplifies the code a little and reduces the IRQ-off regions. With
> this change it is possible to reduce the SMP-function call to a simple
> __raise_softirq_irqoff().
> blk_mq_complete_request_remote() needs a preempt-disable section if the
> request needs to complete on the local CPU. Some callers (USB-storage)
> invoke this in preemptible context and the request needs to be enqueued on
> the same CPU on which the softirq is raised.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> ---
>  block/blk-mq.c         | 77 ++++++++++++++----------------------------
>  include/linux/blkdev.h |  2 +-
>  2 files changed, 27 insertions(+), 52 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 3c0e94913d874..b5138327952a4 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -41,7 +41,7 @@
>  #include "blk-mq-sched.h"
>  #include "blk-rq-qos.h"
>  
> +static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
>  
>  static void blk_mq_poll_stats_start(struct request_queue *q);
>  static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
> @@ -567,68 +567,32 @@ void blk_mq_end_request(struct request *rq, blk_status_t error)
>  }
>  EXPORT_SYMBOL(blk_mq_end_request);
>  
> +static void blk_complete_reqs(struct llist_head *cpu_list)
>  {
> +	struct llist_node *entry;
> +	struct request *rq, *rq_next;
>  
> +	entry = llist_del_all(cpu_list);
> +	entry = llist_reverse_order(entry);

I find the variable naming and split of the assignments a little
strange.  What about:

static void blk_complete_reqs(struct llist_head *list)
{
	struct llist_node *first = llist_reverse_order(llist_del_all(list));
	struct request *rq, *next;

?
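That is, the whole function would then read something like this
(untested, just to illustrate the suggested naming):

static void blk_complete_reqs(struct llist_head *list)
{
	/* grab everything queued so far and restore FIFO order */
	struct llist_node *first = llist_reverse_order(llist_del_all(list));
	struct request *rq, *next;

	llist_for_each_entry_safe(rq, next, first, ipi_list)
		rq->q->mq_ops->complete(rq);
}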

> +	llist_for_each_entry_safe(rq, rq_next, entry, ipi_list)
>  		rq->q->mq_ops->complete(rq);
>  }

Aren't some sanitizers going to be unhappy if we never delete the
request from the list?

>  bool blk_mq_complete_request_remote(struct request *rq)
>  {
> +	struct llist_head *cpu_list;
>  	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
>  
>  	/*
> @@ -669,12 +634,22 @@ bool blk_mq_complete_request_remote(struct request *rq)
>  		return false;
>  
>  	if (blk_mq_complete_need_ipi(rq)) {
> +		unsigned int cpu;
> +
> +		cpu = rq->mq_ctx->cpu;
> +		cpu_list = &per_cpu(blk_cpu_done, cpu);
> +		if (llist_add(&rq->ipi_list, cpu_list)) {
> +			INIT_CSD(&rq->csd, __blk_mq_complete_request_remote, rq);
> +			smp_call_function_single_async(cpu, &rq->csd);
> +		}

I think the above code section inside the conditional should go into a
little helper instead of being open coded here in the fast path routine.
I also don't really see the point of the cpu and cpu_list local variables.
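
Something like the sketch below, maybe -- entirely untested, and the
helper name is just made up here:

static void blk_mq_complete_send_ipi(struct request *rq)
{
	unsigned int cpu = rq->mq_ctx->cpu;

	/* only send the IPI if we added the first entry to the list */
	if (llist_add(&rq->ipi_list, &per_cpu(blk_cpu_done, cpu))) {
		INIT_CSD(&rq->csd, __blk_mq_complete_request_remote, rq);
		smp_call_function_single_async(cpu, &rq->csd);
	}
}

which turns the whole branch in blk_mq_complete_request_remote() into
a single blk_mq_complete_send_ipi(rq) call.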

>  	} else {
>  		if (rq->q->nr_hw_queues > 1)
>  			return false;
> +		preempt_disable();
> +		cpu_list = this_cpu_ptr(&blk_cpu_done);
> +		if (llist_add(&rq->ipi_list, cpu_list))
> +			raise_softirq(BLOCK_SOFTIRQ);
> +		preempt_enable();

I think the section after the return false here also would benefit from
a little helper with a descriptive name.
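Again untested, and again the name is only a suggestion:

static void blk_mq_raise_softirq(struct request *rq)
{
	struct llist_head *list;

	/*
	 * The softirq handler runs on this CPU, so preemption must stay
	 * disabled between queueing the request and raising the softirq.
	 */
	preempt_disable();
	list = this_cpu_ptr(&blk_cpu_done);
	if (llist_add(&rq->ipi_list, list))
		raise_softirq(BLOCK_SOFTIRQ);
	preempt_enable();
}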

Otherwise this looks good to me.


