On Mon, Aug 18, 2014 at 1:48 AM, Jens Axboe <axboe@xxxxxxxxx> wrote:
> On 2014-08-16 02:06, Ming Lei wrote:
>>
>> On 8/16/14, Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>
>>> On 08/15/2014 10:36 AM, Jens Axboe wrote:
>>>>
>>>> On 08/15/2014 10:31 AM, Christoph Hellwig wrote:
>>>>>>
>>>>>> +static void loop_queue_work(struct work_struct *work)
>>>>>
>>>>>
>>>>> Offloading work straight to a workqueue doesn't make much sense
>>>>> in the blk-mq model, as we'll usually be called from one. If you
>>>>> need to avoid the cases where we are called directly, a flag for
>>>>> the blk-mq code to always schedule a workqueue sounds like a much
>>>>> better plan.
>>>>
>>>>
>>>> That's a good point - it would clean up this bit, and be pretty close
>>>> to a one-liner to support in blk-mq for the drivers that always need
>>>> blocking context.
>>>
>>>
>>> Something like this should do the trick - totally untested. But with
>>> that, loop would just need to add BLK_MQ_F_WQ_CONTEXT to its tag set
>>> flags and it could always do the work inline from ->queue_rq().
>>
>>
>> I think it is a good idea.
>>
>> But for loop, there may be two problems:
>>
>> - The default max_active for a bound workqueue is 256, which means
>> several slow loop devices might slow down the whole block system. With
>> kernel AIO it won't be a big deal, but some block drivers/filesystems
>> may not support direct I/O and still fall back to the workqueue.
>>
>> - Section 6 (Guidelines) of Documentation/workqueue.txt: if there is
>> dependency among multiple work items used during memory reclaim, they
>> should be queued to separate workqueues, each with WQ_MEM_RECLAIM.
>
>
> Both are good points. But I think this mainly means that we should support
> this through a potentially per-dispatch-queue workqueue, separate from
> kblockd. There's no reason blk-mq can't support this with a per-hctx
> workqueue, for drivers that need it.

Good idea, and a per-device workqueue should be enough if the
BLK_MQ_F_WQ_CONTEXT flag is set.

Thanks,
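
For illustration, here is a minimal sketch of how the per-device workqueue
idea above might look on the loop side. It is not the patch discussed in the
thread: it assumes the BLK_MQ_F_WQ_CONTEXT flag proposed here (not an
existing API, so a placeholder definition is used), a simplified loop_device
layout, and a loop_mq_ops structure whose ->queue_rq() implementation is not
shown.

#include <linux/blk-mq.h>
#include <linux/workqueue.h>

#ifndef BLK_MQ_F_WQ_CONTEXT
#define BLK_MQ_F_WQ_CONTEXT	(1 << 5)	/* placeholder value; flag proposed in this thread */
#endif

static struct blk_mq_ops loop_mq_ops;		/* ->queue_rq() etc., not shown */

struct loop_device {
	int			lo_number;
	struct blk_mq_tag_set	tag_set;
	struct workqueue_struct	*wq;		/* per-device, WQ_MEM_RECLAIM */
};

static int loop_init_mq(struct loop_device *lo)
{
	int ret;

	/*
	 * One workqueue per loop device, so several slow devices do not
	 * compete for the max_active slots of a single shared workqueue,
	 * and WQ_MEM_RECLAIM so writeback through a loop device can make
	 * forward progress under memory pressure.
	 */
	lo->wq = alloc_workqueue("loop%d", WQ_MEM_RECLAIM, 0, lo->lo_number);
	if (!lo->wq)
		return -ENOMEM;

	lo->tag_set.ops = &loop_mq_ops;
	lo->tag_set.nr_hw_queues = 1;
	lo->tag_set.queue_depth = 128;
	lo->tag_set.numa_node = NUMA_NO_NODE;
	lo->tag_set.flags = BLK_MQ_F_WQ_CONTEXT;	/* flag proposed above */
	lo->tag_set.driver_data = lo;

	ret = blk_mq_alloc_tag_set(&lo->tag_set);
	if (ret)
		destroy_workqueue(lo->wq);
	return ret;
}

Whether the workqueue lives in the driver as above or per-hctx inside blk-mq
(as Jens suggests) is mostly a question of where the allocation happens; the
max_active and WQ_MEM_RECLAIM concerns raised in the thread apply either way.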