On 30/05/13 19:36, Kevin Hilman wrote:
> Tomi Valkeinen <tomi.valkeinen@xxxxxx> writes:
>
>> When using DT, the dss device does not have platform data. However,
>> dss_get_ctx_loss_count() uses the dss device's platform data to find
>> the get_ctx_loss_count function pointer.
>>
>> To fix this, dss_get_ctx_loss_count() needs to be changed to get the
>> platform data from the omapdss device, which is a "virtual" device and
>> always has platform data.
>
> Dumb question (since the DSS device model has always been beyond my
> grasp): how/why does the virtual device still have platform_data on a
> DT boot?

The virtual device is created in generic omap arch code for DT boot
also.

The omapdss virtual device is an old relic. Originally we only had the
omapdss device, not the sub-devices for the HW blocks like we have now.
After adding the sub-devices, the omapdss device was still left, as it
provided useful functionality. It wasn't strictly needed anymore, but
I've just never had time to start refactoring the code to remove it.

Now, with DT, it was just an easy way to pass the function pointers.
Note that we also pass set_min_bus_tput, and a version identifier for
the DSS HW. The PM funcs could perhaps be handled some other way, but
I'm not sure how I could pass the DSS HW version.

> Also, this context_loss_count and DT boot is starting to get
> cumbersome, and there's currently no (good) way of handling the
> context loss in a generic way without pdata function pointers.
>
> I'm starting to think we should move towards dropping this
> OMAP-specific context loss counting, and just assume context is always
> lost. If there are performance problems, runtime PM autosuspend
> timeouts can be used to avoid the transitions.
>
> Or, on some devices, I suspect comparing a few registers against their
> power-on reset values might be a quicker way of detecting lost context
> and would avoid having to call into platform code altogether.

Hmm, true. I'll look at this.
Maybe I won't need get_ctx_loss in omapdss at all.

How about omap_pm_set_min_bus_tput(), can that be handled in some
generic manner?

Although, we currently use that in a bit of a hacky way. What we're
doing is not really setting a minimum bus throughput; we're just
preventing OPP50. The DSS clocks have different maximums at OPP50 than
at OPP100, and we need to keep the core at OPP100 to keep DSS working.
So what I'd rather have is a "prevent_opp50()" than setting an
arbitrarily high minimum bus throughput.

 Tomi