On Wednesday, October 27, 2010, Mathieu Desnoyers wrote:
> * Rafael J. Wysocki (rjw@xxxxxxx) wrote:
...
> Hrm, then why export pm_runtime_get_noresume() at all?

Basically, the PM core needs it for some obscure stuff.  Beyond that,
people really should use it with care (preferably avoid using it at all).

> I don't feel comfortable with some of the pm_runtime_get_noresume()
> callers.
>
> > So if you don't want to resume the device immediately after increasing its
> > usage_count (in which case it's better to use pm_runtime_get_sync()), you
> > should do something like this:
> >
> > 1) pm_runtime_get_noresume(dev);
> > 1a) pm_runtime_barrier(dev); // That takes care of all pending requests etc.
> >
> > 2) ...
> >
> > 3) pm_runtime_put_noidle(dev);
> >
> > [The meaning of pm_runtime_barrier() is that all of the runtime PM activity
> > started before the barrier has been completed when it returns.]
> >
> > There's one place in the PM core where that really is necessary, but I
> > wouldn't recommend anyone doing anything like it in a driver.
>
> grep -r pm_runtime_get_noresume drivers/ turns up very interesting info.
>
> e.g.:
>
> drivers/usb/core/driver.c: usb_autopm_get_interface_async()
>
> 	pm_runtime_get_noresume(&intf->dev);
> 	s = ACCESS_ONCE(intf->dev.power.runtime_status);
> 	if (s == RPM_SUSPENDING || s == RPM_SUSPENDED)
> 		status = pm_request_resume(&intf->dev);
>
> How is this supposed to work?
>
> If the ACCESS_ONCE can be reordered before the atomic_inc(), then I fear
> the device can be suspended even after the check.
>
> My point is that a get/put semantic should imply memory barriers,
> especially if these are exported APIs.

Well, IMO adding a memory barrier to pm_runtime_get_noresume() wouldn't
really change a lot (it still would be racy with respect to some other
runtime PM helper functions).

That said, I guess we should put a "handle with care" sticker on it.

Thanks,
Rafael
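
For illustration, the 1)/1a)/3) sequence quoted above might look roughly
like this in a driver.  This is only a sketch: my_dev_do_work() is a
made-up helper, and per the remarks above the pattern is not recommended
for ordinary drivers, which should normally prefer pm_runtime_get_sync():

	#include <linux/pm_runtime.h>

	/* Hypothetical helper, not taken from any real driver. */
	static void my_dev_do_work(struct device *dev)
	{
		/* 1) Bump usage_count so no new suspend can be started. */
		pm_runtime_get_noresume(dev);

		/*
		 * 1a) Wait for all runtime PM activity (pending requests,
		 * in-flight transitions) started before this point to
		 * complete.  Note the device may still be suspended here
		 * if a suspend finished before the counter was bumped;
		 * the barrier only flushes in-flight activity.
		 */
		pm_runtime_barrier(dev);

		/* 2) ... work that must not race with a new suspend ... */

		/* 3) Drop the reference without queuing an idle request. */
		pm_runtime_put_noidle(dev);
	}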
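
And to make the reordering concern concrete: pm_runtime_get_noresume()
is essentially atomic_inc(&dev->power.usage_count), which by itself
guarantees no memory ordering, so the question raised above is whether
an explicit barrier such as the one below would be needed between the
increment and the status check (illustration only, not a proposed patch;
smp_mb() stands in for whatever ordering a "get" with barrier semantics
would imply):

	pm_runtime_get_noresume(&intf->dev);	/* atomic_inc(usage_count) */
	smp_mb();	/* make the increment visible before reading status */
	s = ACCESS_ONCE(intf->dev.power.runtime_status);
	if (s == RPM_SUSPENDING || s == RPM_SUSPENDED)
		status = pm_request_resume(&intf->dev);

Without such a barrier, the status load could in principle be satisfied
before the increment is globally visible, so a concurrent suspend could
sample usage_count == 0 and proceed while this CPU observes RPM_ACTIVE
and skips the resume request, which is the window Mathieu describes.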