On Wed, May 23, 2012 at 10:58 PM, Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, 23 May 2012, Lin Ming wrote:
>
>> Let's consider the code below.
>>
>> @@ -587,6 +591,11 @@ void __elv_add_request(struct request_queue *q,
>> struct request *rq, int where)
>>  {
>>  	trace_block_rq_insert(q, rq);
>>
>> +	if (!(rq->cmd_flags & REQ_PM))
>> +		if (q->nr_pending++ == 0 && (q->rpm_status == RPM_SUSPENDED ||
>> +		    q->rpm_status == RPM_SUSPENDING) && q->dev)
>> +			pm_request_resume(q->dev);
>> +
>>  	rq->q = q;
>>
>>  	if (rq->cmd_flags & REQ_SOFTBARRIER) {
>>
>> The block layer reads the runtime status and the PM core writes it.
>> The PM core uses dev->power.lock to protect this status.
>>
>> I was wondering whether it would be a problem if the block layer did
>> not acquire dev->power.lock. From your explanation below, it seems it
>> is not a problem.
>
> I don't think it's a problem, because all you're doing is reading
> dev->power.rpm_status -- you're not writing it.
>
> On the other hand, there's nothing really wrong with keeping your own
> local copy of rpm_status. You could think of it as being the queue's
> status as opposed to the device's status. (Also, some people might
> argue that dev->power.rpm_status is supposed to be private to the
> runtime PM core and shouldn't be used by other code.)

Agreed. So I'd like to keep a local copy of rpm_status.
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html