On 3/24/21 8:19 PM, Ming Lei wrote:
> Add one req flag REQ_POLL_CTX which will be used in the following patch
> for supporting bio based IO polling.
>
> This flag helps us to do the following:
>
> 1) the request flag is cloned in __bio_clone_fast(), so if we mark one
> FS bio as REQ_POLL_CTX, all bios cloned from this FS bio will be marked
> as REQ_POLL_CTX too.
>
> 2) create a per-task io polling context if the bio based queue supports
> polling and the submitted bio is HIPRI. The per-task io poll context is
> created during submit_bio() before marking this HIPRI bio as
> REQ_POLL_CTX, so we avoid creating such an io polling context when a
> cloned bio with REQ_POLL_CTX is submitted from another kernel context.
>
> 3) for supporting bio based io polling, we need to poll IOs from all
> underlying queues of the bio device; this flag helps us recognize which
> IO needs to be polled in bio based style, which will be applied in a
> following patch.
>
> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>

Reviewed-by: Jeffle Xu <jefflexu@xxxxxxxxxxxxxxxxx>

> ---
>  block/blk-core.c          | 25 ++++++++++++++++++++++++-
>  include/linux/blk_types.h |  4 ++++
>  2 files changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 4671bbf31fd3..eb07d61cfdc2 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -840,11 +840,30 @@ static inline bool blk_queue_support_bio_poll(struct request_queue *q)
>  static inline void blk_bio_poll_preprocess(struct request_queue *q,
>  					   struct bio *bio)
>  {
> +	bool mq;
> +
>  	if (!(bio->bi_opf & REQ_HIPRI))
>  		return;
>  
> -	if (!blk_queue_poll(q) || (!queue_is_mq(q) && !blk_get_bio_poll_ctx()))
> +	/*
> +	 * Can't support bio based IO polling without per-task poll ctx
> +	 *
> +	 * We have created per-task io poll context, and mark this
> +	 * bio as REQ_POLL_CTX, so: 1) if any cloned bio from this bio is
> +	 * submitted from another kernel context, we won't create bio
> +	 * poll context for it, and that bio can be completed by IRQ;
> +	 * 2) If such bio is submitted from current context, we will
> +	 * complete it via blk_poll(); 3) If driver knows that one
> +	 * underlying bio allocated from driver is for FS bio, meantime
> +	 * it is submitted in current context, driver can mark such bio
> +	 * as REQ_HIPRI & REQ_POLL_CTX manually, so the bio can be completed
> +	 * via blk_poll too.
> +	 */
> +	mq = queue_is_mq(q);
> +	if (!blk_queue_poll(q) || (!mq && !blk_get_bio_poll_ctx()))
>  		bio->bi_opf &= ~REQ_HIPRI;
> +	else if (!mq)
> +		bio->bi_opf |= REQ_POLL_CTX;
>  }
>  
>  static noinline_for_stack bool submit_bio_checks(struct bio *bio)
> @@ -894,8 +913,12 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
>  	/*
>  	 * Create per-task io poll ctx if bio polling supported and HIPRI
>  	 * set.
> +	 *
> +	 * If REQ_POLL_CTX isn't set for this HIPRI bio, we think it originated
> +	 * from FS and allocate io polling context.
>  	 */
>  	blk_create_io_context(q, blk_queue_support_bio_poll(q) &&
> +			!(bio->bi_opf & REQ_POLL_CTX) &&
> +			(bio->bi_opf & REQ_HIPRI));
>  
>  	blk_bio_poll_preprocess(q, bio);
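
To illustrate case 3) of the comment above for other readers: a bio
based driver that allocates its own bio on behalf of an FS bio and
submits it from the same task context could opt that bio into polling
roughly as below. This is only a sketch under assumptions, not part of
the patch: example_submit_driver_bio(), lower_bdev and the driver's
bio_set are made-up names, and a real driver would also set up the
data pages, bi_end_io, and so on.

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Illustrative sketch only: submit a driver-private bio (not cloned
 * from fs_bio) on behalf of an FS bio from the same task context,
 * opting it into bio based polling by marking it
 * REQ_HIPRI | REQ_POLL_CTX.
 */
static void example_submit_driver_bio(struct bio *fs_bio,
				      struct block_device *lower_bdev,
				      struct bio_set *bs)
{
	/* driver-allocated bio; real code would add pages, bi_end_io, ... */
	struct bio *bio = bio_alloc_bioset(GFP_NOIO, 0, bs);

	bio_set_dev(bio, lower_bdev);
	bio->bi_opf = bio_op(fs_bio);

	/*
	 * fs_bio already went through submit_bio_checks(): if it still
	 * carries REQ_HIPRI, the per-task poll context exists, and since
	 * we submit from the same context this bio can be completed via
	 * blk_poll() too, as the comment in blk_bio_poll_preprocess()
	 * describes.
	 */
	if (fs_bio->bi_opf & REQ_HIPRI)
		bio->bi_opf |= REQ_HIPRI | REQ_POLL_CTX;

	submit_bio(bio);
}

Cloned bios don't need this manual marking, since bi_opf (including
REQ_POLL_CTX) is copied in the fast clone path, as case 1) of the
commit log says.
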
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index db026b6ec15a..99160d588c2d 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -394,6 +394,9 @@ enum req_flag_bits {
>  
>  	__REQ_HIPRI,
>  
> +	/* for marking IOs originated from same FS bio in same context */
> +	__REQ_POLL_CTX,
> +
>  	/* for driver use */
>  	__REQ_DRV,
>  	__REQ_SWAP,		/* swapping request. */
> @@ -418,6 +421,7 @@
>  
>  #define REQ_NOUNMAP		(1ULL << __REQ_NOUNMAP)
>  #define REQ_HIPRI		(1ULL << __REQ_HIPRI)
> +#define REQ_POLL_CTX		(1ULL << __REQ_POLL_CTX)
>  
>  #define REQ_DRV			(1ULL << __REQ_DRV)
>  #define REQ_SWAP		(1ULL << __REQ_SWAP)
> -- 

Thanks,
Jeffle