On Mon, 1 Aug 2005, Patrick Mochel wrote:

> Good summary, though you're missing a few key points and issues. Details
> below, but let's begin with this.

Actually I wasn't trying to summarize the Ottawa discussion, but to
forge ahead by starting to consider implementations. That's why I
didn't mention a lot of the things you bring up here.

> We identified three facets of Runtime Power Management:
>
> - Automatic (or Dynamic) Suspend ...
> - Directed Suspend ...
> - Performance ...

All important matters, but not what I was concentrating on.

> To address your points directly:
>
> On Sat, 30 Jul 2005, Alan Stern wrote:
>
> > We have also agreed that runtime power state changes need to bubble up
> > the device tree. To handle this, drivers for interior nodes in the
> > tree can define "link states".
>
> Ah, shame on you - a USBism.

Not really -- the phrase "link state" isn't used in the USB
specification as far as I know. It's just a simple term I came up with
to describe everything a parent needs to know about the power
requirements of a child. As such it's not specific to any particular
bus or technology.

> Unfortunately, it doesn't make much sense to
> those not familiar with the context.

I _did_ give a definition, so hopefully it makes sense in the context
of my writeup.

> > When a driver changes a device's state, it will notify all of the
> > power parents about link state changes while doing so.
>
> > These notifications are one-way, child-to-parent only. We don't need
> > pre- and post-notifications; each message will inform the parent of a
> > single link-state change, which the parent will then carry out.
>
> This may not be true. There could be a race between children notifying
> their parent of state changes if e.g. the states of two children are
> opposite and in the process of inverting (there is one suspended one that
> tries to resume, and one resumed device that tries to suspend).

I agree that a parent may have to cope with situations where two
children are trying to change state at the same time.
struct device->semaphore should help there.

This doesn't affect what I wrote, however. Link-state changes don't
involve races, because a link state describes the connection between
one specific parent and one specific child. If two different links
change state at the same time, that's not a race.

> I haven't thought this all the way through, so it might just represent a
> glitch in performance rather than a potential error condition. Regardless,
> it's an implementation detail that I don't want to get bogged down with
> right now.

Agreed. But the locking issue is a very important one, which needs to
be settled before we can write any code.

> > Order of locking: The general rule, needed to prevent deadlock, for
> > acquiring the device semaphores is this: Don't lock a device if you
> > already hold one of its descendants' locks.
>
> So is this.

(Out of context, sorry -- you were saying that the order of locking is
an important design consideration.)

> These are limitations to what we have now, but it doesn't have to be that
> way. Ideally, we will have a solution that works flawlessly under the
> current constraints. Otherwise, we should (very carefully) examine ways in
> which we can adjust the current Core to better handle what we need to do.
> I don't want to redesign the Driver Core on a whim; only under specific
> requirements.

Me neither. In fact, I would go so far as to say that this is the main
impediment to RTPM implementations at the moment. If I knew the answer
to the locking-order problem, I could fix up the USB RTPM code right
now.
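To make the ordering rule concrete, here's roughly the shape I have in
mind, using the per-device semaphore. The two helper names are made up
for illustration -- nothing like this is in the tree yet:

#include <linux/device.h>
#include <asm/semaphore.h>

/*
 * The rule: always take an ancestor's semaphore before a descendant's,
 * never the reverse. (A deeper state change would walk the whole
 * ancestor path top-down in the same way.)
 */
static void lock_device_and_parent(struct device *dev)
{
	if (dev->parent)
		down(&dev->parent->sem);
	down(&dev->sem);
}

/* Release in the opposite order of acquisition. */
static void unlock_device_and_parent(struct device *dev)
{
	up(&dev->sem);
	if (dev->parent)
		up(&dev->parent->sem);
}

If every path follows that rule, no thread ever holds a child's lock
while waiting for its parent's, so the ABBA deadlock can't arise. The
hard part is the child-to-parent notification path, where the natural
calling sequence runs in exactly the wrong direction.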
> > Idle-timeout RTPM: We certainly should have an API whereby userspace
> > can inform the kernel of an idle-timeout value to use for
> > autosuspending. (In principle there could be multiple timeout values,
> > for successively deeper levels of power saving.) This cries out to be
> > managed in a centralized way rather than letting each driver have its
> > own API. It's not so clear what the most efficient implementation
> > will be. Should every device have its own idle-timeout kernel timer?
> > (That's a lot of kernel timers.) Or should the RTPM kernel thread
> > wake up every second to scan a list of devices that may have exceeded
> > their idle timeouts?
>
> This is definitely a design consideration. However, I don't want to design
> any infrastructure like this now because we don't know how people are
> going to use it. We want some early adopters to start working with the
> concepts and implementing their own versions. Once we get an idea of how
> to abstract it, maybe we can start providing some library functions.

Having a few ad-hoc designs to start with is fine. We _will_ want to
unify them before there are too many.
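For what it's worth, here's a rough sketch of the timer-per-device
variant, using the ordinary kernel timer API. All the rtpm_* names
below are invented for illustration; this shows the shape of the
thing, not a proposed interface:

#include <linux/timer.h>
#include <linux/jiffies.h>

struct rtpm_data {
	struct timer_list idle_timer;	/* fires after a stretch of inactivity */
	unsigned long timeout;		/* idle timeout, in jiffies */
	int idle_expired;		/* set once the timeout has elapsed */
};

/* Timer handler: the timeout expired with no intervening activity. */
static void rtpm_idle_expired(unsigned long data)
{
	struct rtpm_data *rd = (struct rtpm_data *) data;

	/*
	 * Real code would queue an autosuspend request here; the state
	 * change itself belongs in process context, not in a timer
	 * handler.
	 */
	rd->idle_expired = 1;
}

/* Called on every I/O completion; pushes the deadline back. */
static void rtpm_mark_busy(struct rtpm_data *rd)
{
	rd->idle_expired = 0;
	mod_timer(&rd->idle_timer, jiffies + rd->timeout);
}

static void rtpm_timer_init(struct rtpm_data *rd, unsigned long secs)
{
	rd->timeout = secs * HZ;
	init_timer(&rd->idle_timer);
	rd->idle_timer.function = rtpm_idle_expired;
	rd->idle_timer.data = (unsigned long) rd;
	mod_timer(&rd->idle_timer, jiffies + rd->timeout);
}

The scanning-thread alternative trades all those timers for one thread
that wakes up periodically and compares each device's last-busy time
against its timeout; either way, the mark-busy call on every I/O
completion is the part the drivers can't avoid.

Thanks for the comments,

Alan Stern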