On Fri, 13 Jul 2018, Ming Lei wrote:

> This patch introduces blk_mq_pm_add_request(), which is called after
> allocating a request. Also blk_mq_pm_put_request() is introduced
> and called after a request is freed.
>
> For blk-mq, it can be quite expensive to account for in-flight IOs,
> so this patch calls pm_runtime_mark_last_busy() simply after each IO
> is done, instead of doing that only after the last in-flight IO is done.
> This way is still workable, since active non-PM IO will be checked
> in blk_pre_runtime_suspend(), and runtime suspend will be prevented
> if there is any active non-PM IO.
>
> It turns out that synchronization between runtime PM and the IO path
> has to be done to avoid races, so this patch uses a seqlock for this
> purpose. The cost introduced in the fast IO path can thus be minimized,
> given that seqlocks are often used in fast paths, such as reading
> jiffies & tick, or d_walk(), ...
>
> Cc: "Rafael J. Wysocki" <rjw@xxxxxxxxxxxxx>
> Cc: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
> Cc: linux-pm@xxxxxxxxxxxxxxx
> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> Cc: Christoph Hellwig <hch@xxxxxx>
> Cc: Bart Van Assche <bart.vanassche@xxxxxxx>
> Cc: Hannes Reinecke <hare@xxxxxxx>
> Cc: Johannes Thumshirn <jthumshirn@xxxxxxx>
> Cc: Adrian Hunter <adrian.hunter@xxxxxxxxx>
> Cc: "James E.J. Bottomley" <jejb@xxxxxxxxxxxxxxxxxx>
> Cc: "Martin K. Petersen" <martin.petersen@xxxxxxxxxx>
> Cc: linux-scsi@xxxxxxxxxxxxxxx
> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> ---

> +static void blk_mq_post_runtime_suspend(struct request_queue *q, int err)
> +{
> +	if (!blk_mq_support_runtime_pm(q))
> +		return;
> +
> +	write_seqlock_irq(&q->rpm_lock);
> +	__blk_post_runtime_suspend(q, err);
> +	write_sequnlock_irq(&q->rpm_lock);
> +}
> +
> +static void blk_mq_pre_runtime_resume(struct request_queue *q)
> +{
> +	if (!blk_mq_support_runtime_pm(q))
> +		return;
> +
> +	write_seqlock_irq(&q->rpm_lock);
> +	q->rpm_status = RPM_RESUMING;
> +	write_sequnlock_irq(&q->rpm_lock);
> +}
> +
> +static void blk_mq_post_runtime_resume(struct request_queue *q, int err)
> +{
> +	if (!blk_mq_support_runtime_pm(q))
> +		return;
> +
> +	write_seqlock_irq(&q->rpm_lock);
> +	__blk_post_runtime_resume(q, err);
> +	write_sequnlock_irq(&q->rpm_lock);
> +}
> +
> +static void blk_mq_set_runtime_active(struct request_queue *q)
> +{
> +	if (!blk_mq_support_runtime_pm(q))
> +		return;
> +
> +	write_seqlock_irq(&q->rpm_lock);
> +	__blk_set_runtime_active(q);
> +	write_sequnlock_irq(&q->rpm_lock);
> +}

Would the code be cleaner if these routines were written inline, like
their non-mq counterparts?

Alan Stern