Paul Walmsley <paul@xxxxxxxxx> writes:

> Hi,
>
> On Tue, 7 Dec 2010, Chikkature Rajashekar, Madhusudhan wrote:
>
>> On Tue, Dec 7, 2010 at 1:51 PM, Adrian Hunter <adrian.hunter@xxxxxxxxx> wrote:
>> >
>> > It is at least because omap_pm_get_dev_context_loss_count() is not
>> > implemented.  Tero Kristo was looking at that recently.
>> >
>>
>> Yes. I agree that is the problem. In the .32 kernel I had hooked it to
>> "get_last_off_on_transaction_id", which helped.
>> But that functionality does not exist anymore. So something equivalent
>> to tell the driver when OFF was hit will make it work.
>
> OK, let's see if we can get that fixed in at least some trivial
> way for 2.6.38.  While working on this, I applied this trivial patch:
>
> diff --git a/arch/arm/plat-omap/omap-pm-noop.c b/arch/arm/plat-omap/omap-pm-noop.c
> index e129ce8..781aa5f 100644
> --- a/arch/arm/plat-omap/omap-pm-noop.c
> +++ b/arch/arm/plat-omap/omap-pm-noop.c
> @@ -30,6 +30,8 @@ struct omap_opp *dsp_opps;
>  struct omap_opp *mpu_opps;
>  struct omap_opp *l3_opps;
>
> +static int dummy_context_loss_counter;
> +
>  /*
>   * Device-driver-originated constraints (via board-*.c files)
>   */
> @@ -303,7 +305,7 @@ int omap_pm_get_dev_context_loss_count(struct device *dev)
>  	 * off counter.
>  	 */
>
> -	return 0;
> +	return dummy_context_loss_counter++;
>  }
>
> ... which causes drivers to believe that device context has been lost
> after each call to omap_pm_get_dev_context_loss_count().  Brutal, but
> effective for chasing out context save/restore bugs.

Tested-by: Kevin Hilman <khilman@xxxxxxxxxxxxxxxxxxx>

I verified that this, in combination with your other patch[1], results in
working off-mode suspend with MMC on 34xx/n900, 35xx/beagle and 36xx/zoom3.

Paul, do you want to submit a formal patch for this for 2.6.38?  If not,
I can add a changelog and queue this with the other PM core changes for
2.6.38.

Kevin

[1] MMC: omap_hsmmc: enable interface clock before calling mmc_host_enable()
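
As a side note on how drivers consume this counter: the sketch below is a
minimal, self-contained C illustration (not actual omap_hsmmc or kernel code)
of the compare-and-restore pattern that omap_pm_get_dev_context_loss_count()
is meant to support.  The names fake_get_dev_context_loss_count(),
saved_loss_count and restore_context() are hypothetical stand-ins.  With the
ever-incrementing dummy counter from the patch above, the restore path runs
on every check, which is the "brutal but effective" behaviour described.

/*
 * Illustrative sketch only: a userspace simulation of the driver-side
 * context-loss check.  Names here are made up for the example and are
 * not the real kernel API apart from mimicking its return semantics.
 */
#include <stdio.h>

/* Mimics the patched omap-pm-noop.c: every query reports a new loss. */
static int dummy_context_loss_counter;

static int fake_get_dev_context_loss_count(void)
{
	return dummy_context_loss_counter++;
}

/* Driver-side state: the count recorded when context was last saved. */
static int saved_loss_count = -1;

static void restore_context(void)
{
	printf("restoring device context\n");
}

/* Called on resume/enable: restore only if the counter has moved. */
static void maybe_restore_context(void)
{
	int count = fake_get_dev_context_loss_count();

	if (count != saved_loss_count) {
		restore_context();
		saved_loss_count = count;
	}
}

int main(void)
{
	/* With the ever-incrementing dummy counter, every call restores. */
	maybe_restore_context();
	maybe_restore_context();
	return 0;
}

With a real, hardware-backed loss counter the second call would normally be a
no-op; the point of the dummy counter is to force the restore path to run
every time so that save/restore bugs show up immediately.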