On Thu, 2010-06-03 at 17:17 +0200, Florian Mickler wrote:
> On Thu, 03 Jun 2010 09:36:34 -0500
> James Bottomley <James.Bottomley@xxxxxxx> wrote:
>
> > On Thu, 2010-06-03 at 00:10 -0700, Arve Hjønnevåg wrote:
> > > On Wed, Jun 2, 2010 at 10:40 PM, mark gross <640e9920@xxxxxxxxx> wrote:
> > > > On Wed, Jun 02, 2010 at 09:54:15PM -0700, Brian Swetland wrote:
> > > >> On Wed, Jun 2, 2010 at 8:18 PM, mark gross <640e9920@xxxxxxxxx> wrote:
> > > >> > On Wed, Jun 02, 2010 at 02:58:30PM -0700, Arve Hjønnevåg wrote:
> > > >> >>
> > > >> >> The list is not short. You have all the inactive and active
> > > >> >> constraints on the same list. If you change it to a two-level list,
> > > >> >> though, the list of unique values (which is the list you have to
> > > >> >> walk) may be short enough for a tree to be overkill.
> > > >> >
> > > >> > What have you seen in practice from the wake-lock stats?
> > > >> >
> > > >> > I'm having a hard time seeing where you could get more than just a
> > > >> > handful. However, one could go to a dual list (like the scheduler)
> > > >> > and move inactive nodes from an active to an inactive list, or we
> > > >> > could simply remove them from the list upon inactivity, which would
> > > >> > work well after I change the API to have the client allocate the
> > > >> > memory for the nodes... BUT, if you're moving things in and out of a
> > > >> > list a lot, I'm not sure where the break-even point is at which
> > > >> > changing the structure helps.
> > > >> >
> > > >> > We'll need to try it.
> > > >> >
> > > >> > I think we will almost never see more than 10 list elements.
> > > >> >
> > > >> > --mgross
> > > >>
> > > >> I see about 80 (based on the batteryinfo dump) on my Nexus One
> > > >> (QSD8250, Android Froyo):
> > > >
> > > > Shucks.
> > > >
> > > > Well, I think for a pm_qos class that has a boolean dynamic range we
> > > > can get away with not walking the list on every request update. We can
> > > > use a counter, and the list will be mostly for stats.
> > > >
> > >
> > > Did you give any thought to my suggestion to use only one entry per
> > > unique value on the first-level list and then use secondary lists of
> > > identical values? That way, if you only have two constraint values, the
> > > list you have to walk when updating a request will never have more
> > > than two entries, regardless of how many total requests you have.
> > >
> > > A request update then becomes something like this:
> > >
> > >   if on primary list {
> > >           unlink from primary list
> > >           if secondary list is not empty
> > >                   get next secondary entry and add it in the same spot
> > >                   on the primary list
> > >   }
> > >   unlink from secondary list
> > >   find new spot on primary list
> > >   if already there
> > >           add to secondary list
> > >   else
> > >           add to primary list
> >
> > This is just reinventing hash-bucketed lists. To get the benefits, all
> > we do is implement an N-state constraint as backed by an N-bucket hash
> > list, for which the kernel already has all the internal mechanics.
> >
> > James
> >
>
> http://www.itl.nist.gov/div897/sqg/dads/HTML/priorityque.html
>
> So no reinvention. Just using a common scheme.

By reinvention I meant open-coding a common pattern for which the kernel
already has an API (whether we go with hash buckets or plists).

James

_______________________________________________
linux-pm mailing list
linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/linux-pm
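
For concreteness, here is a minimal sketch of the counter-per-value
("bucket") idea discussed above: an N-state constraint keeps one counter
per possible value, so updating a request costs O(N) in the number of
distinct values rather than O(number of outstanding requests). The names
(constraint_class, update_request, NR_VALUES) are invented for
illustration and are not the existing pm_qos API; in the kernel this
would presumably sit behind pm_qos_update_request() and use the hlist or
plist primitives plus proper locking rather than a bare array.

#include <assert.h>

#define NR_VALUES      2        /* e.g. a boolean constraint class */
#define NO_VALUE       (-1)     /* request currently inactive */

struct constraint_class {
        unsigned int count[NR_VALUES];  /* active requests per value */
        int target;                     /* current aggregate (max wins) */
};

static void recompute_target(struct constraint_class *c)
{
        int v;

        c->target = 0;                  /* default when nothing is active */
        for (v = NR_VALUES - 1; v >= 0; v--) {
                if (c->count[v]) {
                        c->target = v;  /* highest populated bucket wins */
                        break;
                }
        }
}

/* Move one request from old_value to new_value; NO_VALUE means absent. */
static void update_request(struct constraint_class *c,
                           int old_value, int new_value)
{
        if (old_value != NO_VALUE)
                c->count[old_value]--;
        if (new_value != NO_VALUE)
                c->count[new_value]++;
        recompute_target(c);
}

int main(void)
{
        struct constraint_class c = { { 0 }, 0 };

        update_request(&c, NO_VALUE, 1);        /* first request asks for 1 */
        update_request(&c, NO_VALUE, 0);        /* second request asks for 0 */
        assert(c.target == 1);

        update_request(&c, 1, 0);               /* first request drops to 0 */
        assert(c.target == 0);
        return 0;
}

For a constraint class whose dynamic range is large or sparse, the same
aggregate could instead be kept on a priority-sorted list (the kernel's
plist), which is the other option James mentions.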