Tomi Valkeinen <tomi.valkeinen@xxxxxx> writes:

> On Wed, 2011-05-18 at 12:50 +0200, Kevin Hilman wrote:
>> Tomi Valkeinen <tomi.valkeinen@xxxxxx> writes:
>>
>> > Hi Kevin,
>> >
>> > I was fixing DSS context loss handling, which is a bit broken, and
>> > while testing on an OMAP3 Overo with -rc7 and omap2plus_defconfig, I
>> > noticed that get_context_loss_count() seems to always return 0.
>> >
>> > 0 should be returned when an error happens, but as far as I can see
>> > in pwrdm_get_context_loss_count(), no error is happening; the DSS
>> > context has simply never been lost, so the returned count is 0.
>> >
>> > Is this correct? And what happens when the count wraps and goes back
>> > to zero: does the function return 0 in that case?
>>
>> Hmm, you're right. Zero is actually documented as the error return
>> value (even though it's not really checked.)
>>
>> Since drivers should only ever care about the *difference* in value
>> between two calls to get_context_loss_count(), this might not be a
>> big deal, but a proper fix is probably to have the state counters
>> start at one.
>
> But if an error occurs in get_context_loss_count(), for whatever
> reason, I'd guess it's safer from the driver's perspective to assume
> that a context restore _is_ needed. If the driver treats zero as a
> normal return value, then it will never restore context in the case
> where get_context_loss_count() returns 0 for every call.

Looking closer at the code, a zero return happens only when:

1) no hwmod is associated with the omap_device
2) no power domain is associated with the hwmod
3) the power domain has not (yet) lost context

None of these is actually an error condition per se, and in all cases a
zero indicates that context has not been lost (or that we can't tell
whether it has been lost). So I think the current code is correct.

Are you finding a case where HW context has actually been lost while the
powerdomain context-loss count is still at zero?
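For illustration, the driver-side pattern under discussion (compare two readings of the context-loss count, restore only when they differ, and treat an all-zero reading conservatively as Tomi suggests) might be sketched like this. This is a simplified user-space sketch, not kernel code: `get_context_loss_count_stub()`, `fake_loss_count`, and `struct dss_ctx` are invented stand-ins for the real omap_device/powerdomain interfaces.

```c
#include <stdbool.h>

/* Stand-in for the powerdomain context-loss counter; in the kernel this
 * would come from get_context_loss_count() on the omap_device. */
static unsigned int fake_loss_count;

static unsigned int get_context_loss_count_stub(void)
{
	return fake_loss_count;
}

struct dss_ctx {
	unsigned int saved_loss_count;	/* reading taken at context-save time */
};

/*
 * Only the *difference* between two readings matters.  A pair of zero
 * readings is ambiguous under the current code (error vs. "context never
 * lost"), so the conservative choice is to restore anyway.
 */
static bool need_ctx_restore(struct dss_ctx *ctx)
{
	unsigned int now = get_context_loss_count_stub();

	if (now == 0 && ctx->saved_loss_count == 0)
		return true;	/* can't tell: assume context was lost */

	return now != ctx->saved_loss_count;
}
```

Note that if the counters started at one, as proposed above, the zero-vs-zero special case would disappear: zero would then unambiguously mean an error.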
Kevin