On 08/15/2014 10:36 AM, Jens Axboe wrote:
> On 08/15/2014 10:31 AM, Christoph Hellwig wrote:
>>> +static void loop_queue_work(struct work_struct *work)
>>
>> Offloading work straight to a workqueue doesn't make much sense
>> in the blk-mq model as we'll usually be called from one. If you
>> need to avoid the cases where we are called directly, a flag for
>> the blk-mq code to always schedule a workqueue sounds like a much
>> better plan.
>
> That's a good point - would clean up this bit, and be pretty close to a
> one-liner to support in blk-mq for the drivers that always need blocking
> context.

Something like this should do the trick - totally untested. But with
that, loop would just need to add BLK_MQ_F_WQ_CONTEXT to its tag set
flags and it could always do the work inline from ->queue_rq().

--
Jens Axboe
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5189cb1e478a..a97eb9a4af60 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -803,6 +803,9 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
 		return;
 
+	if (hctx->flags & BLK_MQ_F_WQ_CONTEXT)
+		async = true;
+
 	if (!async && cpumask_test_cpu(smp_processor_id(), hctx->cpumask))
 		__blk_mq_run_hw_queue(hctx);
 	else if (hctx->queue->nr_hw_queues == 1)
@@ -1173,7 +1176,7 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		goto run_queue;
 	}
 
-	if (is_sync) {
+	if (is_sync && !(data.hctx->flags & BLK_MQ_F_WQ_CONTEXT)) {
 		int ret;
 
 		blk_mq_bio_to_request(rq, bio);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index eb726b9c5762..c7a8c5fdd380 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -127,7 +127,8 @@ enum {
 	BLK_MQ_RQ_QUEUE_ERROR	= 2,	/* end IO with error */
 
 	BLK_MQ_F_SHOULD_MERGE	= 1 << 0,
-	BLK_MQ_F_SHOULD_SORT	= 1 << 1,
+	BLK_MQ_F_WQ_CONTEXT	= 1 << 1,	/* ->queue_rq() must run from
+						 * a blocking context */
 	BLK_MQ_F_TAG_SHARED	= 1 << 2,
 	BLK_MQ_F_SG_MERGE	= 1 << 3,
 	BLK_MQ_F_SYSFS_UP	= 1 << 4,