Hi Bjorn,

On 12/17/2016 09:11 AM, Bjorn Andersson wrote:
> On Fri 16 Dec 16:56 PST 2016, Stephen Boyd wrote:
>
>> On 12/12, Bjorn Andersson wrote:
>>> On Wed 30 Nov 01:22 PST 2016, Stanimir Varbanov wrote:
>>> [..]
>>>
>>>> I think using a power domain in the remoteproc driver will break power
>>>> management of the v4l2 venus driver. Presently I use runtime pm to
>>>> switch the GDSC ON/OFF and enable/disable the Venus clocks.
>>>>
>>>> When the .probe of the venus remoteproc driver is called the power domain
>>>> becomes ON and is never switched OFF (except when the platform driver
>>>> .remove method is called). One possible solution would be to add runtime
>>>> pm in the venus remoteproc driver.
>>>>
>>>> IMO we should think more about that.
>>>>
>>>> Bjorn, Stephen, what are your opinions?
>>>>
>>>
>>> Sorry for not getting back to you on this after our earlier discussion
>>> on the topic.
>>>
>>> Modelling the remoteproc and codec drivers as completely separate
>>> entities (with only the rproc phandle) gives us issues with the timing
>>> between the two. As I pulled down and started playing with [1] I noticed
>>> that I was not able to get the codec driver to probe unless I compiled
>>> it as a module which I insmoded after my /lib/firmware became available.
>>>
>>> In addition to this, there's no way for the remoteproc driver to inform
>>> the codec driver when it's actually shutting down and booting back up
>>> during a crash recovery (an async operation).
>>>
>>> And lastly, when I talked to Rob about the DT binding, the obvious
>>> question came up of why one piece of hardware has two disjunct nodes in
>>> the DT.
>>>
>>>
>>> So I believe we should represent the codec driver as a child of the
>>> remoteproc driver, utilizing the newly introduced "remoteproc
>>> subdevices" to probe and remove the codec driver based on the state of
>>> the remoteproc driver. This relationship will also allow us to tie
>>> certain resources (e.g. the clocks and power-domain) to the remoteproc
>>> driver and use pm_runtime in either driver to make sure the resources
>>> are enabled appropriately.
>>>
>>> I did backport some patches from my v4.10 remoteproc pull request into
>>> [2], merged this into [1] and made a prototype of this. You can find
>>> it here [3]. I did test that I can boot the remoteproc and run
>>> v4l2-compliance, shut it down, boot it back up and re-run
>>> v4l2-compliance, and it seems to work fine.
>>>
>>
>> From a DT perspective this seems backwards. We shouldn't be
>> putting something with a reg property underneath a "virtual" node
>> that doesn't have a reg property. It's already causing pain,
>> evident by these sorts of patches, so something seems wrong.
>>
>
> I agree.
>
>> Are we gaining anything by having a remoteproc driver and device
>> for the video hardware here?
>
> After discussing the matter with Avaneesh I realized that the downstream
> driver powers the venus core up and down based on the presence of
> clients. I expect that this means we have to do the same to be able to
> meet our power KPIs.
>
> With this in mind the remoteproc driver becomes a wrapper for an mdt
> loader and the scm pas interface - and brings with it a bunch of other
> things, expectations and challenges.
>
>> Why can't the video driver load/unload firmware with the mdt loader
>> code by itself when it wants to boot the processor?
>
> Providing a saner api around the mdt-loader and pas would allow venus
> and Adreno to reuse the code, without the remoteproc bloat.

Does it make sense if we move the mdt parser into drivers/firmware?
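
To make that a bit more concrete, here is a rough sketch of what such a
helper could look like and how venus (or Adreno) could call it directly,
without a remoteproc in between. The function name, parameter list and
the PAS id define below are only illustrative assumptions for the sake
of discussion, not an existing interface; request_firmware() /
release_firmware() and the scm pas helpers are the parts we already have.

	#include <linux/device.h>
	#include <linux/firmware.h>
	#include <linux/qcom_scm.h>

	/* assumed PAS id for venus, as used by the downstream driver */
	#define VENUS_PAS_ID	9

	/*
	 * Proposed helper, living in drivers/firmware: parse the mdt/ELF
	 * headers, copy the segments into the carveout and hand the
	 * metadata to TZ for authentication.
	 */
	int qcom_mdt_load(struct device *dev, const struct firmware *fw,
			  const char *fw_name, int pas_id,
			  void *mem_va, phys_addr_t mem_phys,
			  size_t mem_size);

	static int venus_boot(struct device *dev, void *mem_va,
			      phys_addr_t mem_phys, size_t mem_size)
	{
		const struct firmware *fw;
		int ret;

		ret = request_firmware(&fw, "venus.mdt", dev);
		if (ret)
			return ret;

		/* load the segments and authenticate the image */
		ret = qcom_mdt_load(dev, fw, "venus.mdt", VENUS_PAS_ID,
				    mem_va, mem_phys, mem_size);
		release_firmware(fw);
		if (ret)
			return ret;

		/* take the core out of reset once TZ accepts the image */
		return qcom_scm_pas_auth_and_reset(VENUS_PAS_ID);
	}

With something along those lines the venus boot path would not need a
remoteproc at all, and the same helper could still back the
remoteproc-based PIL drivers.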
--
regards,
Stan