Re: [PATCH V8 07/11] blk-mq: stop to handle IO and drain IO before hctx becomes inactive

On Sat, Apr 25, 2020 at 05:48:32PM +0200, Christoph Hellwig wrote:
>  		atomic_inc(&data.hctx->nr_active);
>  	}
>  	data.hctx->tags->rqs[rq->tag] = rq;
>  
>  	/*
> +	 * Ensure updates to rq->tag and tags->rqs[] are seen by
> +	 * blk_mq_tags_inflight_rqs.  This pairs with the smp_mb__after_atomic
> +	 * in blk_mq_hctx_notify_offline.  This only matters in case a process
> +	 * gets migrated to another CPU that is not mapped to this hctx.
>  	 */
> +	if (rq->mq_ctx->cpu != get_cpu())
>  		smp_mb();
> +	put_cpu();

This looks exceedingly weird; how do you think you can get to another
CPU and not have an smp_mb() implied in the migration itself? Also, what
stops the migration from happening right after the put_cpu()?
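To spell out that second point, here is the hunk again with the window
marked (a sketch of the posted code, not a suggested fix):

```c
	if (rq->mq_ctx->cpu != get_cpu())	/* preemption disabled ... */
		smp_mb();
	put_cpu();				/* ... re-enabled here */
	/*
	 * Nothing pins the task past this point; it can migrate to a
	 * CPU outside hctx->cpumask before the BLK_MQ_S_INACTIVE test
	 * below runs, so whatever the barrier was meant to order can
	 * still race with the hctx going inactive.
	 */
	if (unlikely(test_bit(BLK_MQ_S_INACTIVE, &rq->mq_hctx->state)))
		blk_mq_put_driver_tag(rq);
```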


>  	if (unlikely(test_bit(BLK_MQ_S_INACTIVE, &rq->mq_hctx->state))) {
>  		blk_mq_put_driver_tag(rq);


> +static inline bool blk_mq_last_cpu_in_hctx(unsigned int cpu,
> +		struct blk_mq_hw_ctx *hctx)
>  {
> +	if (!cpumask_test_cpu(cpu, hctx->cpumask))
> +		return false;
> +	if (cpumask_next_and(-1, hctx->cpumask, cpu_online_mask) != cpu)
> +		return false;
> +	if (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
> +		return false;
> +	return true;
>  }

Does this want something like:

	lockdep_assert_held(&set->tag_list_lock);

to make sure hctx->cpumask is stable? Those mask ops are not stable vs
concurrent set/clear at all.



