On Fri, Nov 08, 2019 at 10:12:49AM -0800, Stephen Boyd wrote:
> Quoting Thierry Reding (2019-11-08 02:11:16)
> > On Thu, Nov 07, 2019 at 11:19:32AM -0800, Stephen Boyd wrote:
> > > Quoting Thierry Reding (2019-11-07 07:21:15)
> > > > On Thu, Nov 07, 2019 at 03:54:03AM +0300, Dmitry Osipenko wrote:
> > > > > 07.11.2019 02:10, Stephen Boyd writes:
> > > > > > Quoting Sowjanya Komatineni (2019-08-16 12:41:52)
> > > > > >> This patch adds an API clk_hw_get_parent_index to get the index
> > > > > >> of the clock parent to use during the clock restore operations
> > > > > >> on system resume.
> > > > > >
> > > > > > Is there a reason we can't save the clk hw index at suspend time
> > > > > > by reading the hardware to understand the current parent? The
> > > > > > parent index typically doesn't matter unless we're trying to
> > > > > > communicate something from the framework to the provider driver.
> > > > > > Put another way, I would think the provider driver can figure out
> > > > > > the index itself without having to go through the framework to do
> > > > > > so.
> > > > >
> > > > > Isn't it a bit wasteful to duplicate information about the parent
> > > > > within a provider if the framework already has that info? The whole
> > > > > point of this new API is to allow providers to avoid that
> > > > > unnecessary duplication.
> > > > >
> > > > > Please note that clk_hw_get_parent_index is getting used only at
> > > > > resume time and not at suspend.
> > > >
> > > > I agree with this. All of the information that we need is already
> > > > cached in the framework. Doing this in the driver would mean
> > > > essentially adding a "saved parent" field along with code to read
> > > > the value at suspend time to the three types of clocks that
> > > > currently use this core helper.
> > >
> > > Don't we already have a "saved parent" field by storing the pointer
> > > to the clk_hw?
> > >
> > > > That's certainly something that we *can* do, but it doesn't sound
> > > > like a better option than simply querying the framework for the
> > > > value that we need.
> > >
> > > Let me say this another way. Why does this driver want to know the
> > > index that the framework uses for some clk_hw pointer? Perhaps it
> > > happens to align with the same value that the hardware uses, but I
> > > still don't understand why the driver wants to know what the
> > > framework has decided is the index for some clk_hw pointer.
> > >
> > > Or is this something like "give me the index for the parent that the
> > > framework thinks I currently have but in reality don't have anymore
> > > because the register contents were wiped and we need to reparent it"?
> >
> > Yeah, that's exactly what this is being used for. It's used to restore
> > the parent/child relationship during resume after the registers have
> > been wiped during suspend.
>
> Ok cool. Our whole suspend/resume and save/restore story hasn't really
> been well thought out so we may want to pull all this logic into the
> core one day. For now it's OK to do the heavy lifting from provider
> drivers until someone gets a better grasp on how this should all work.

Ah, that would explain why I was scratching my head trying to understand
how exactly this was supposed to work. It did feel like there was some
infrastructure there, but looking around I couldn't find a very
consistent usage pattern.

I think suspend/resume is always a little tricky. For example the clocks
may require slightly different logical sequences between SoCs. Maybe
even different types of clocks have different needs. We seem to have a
bit of that on Tegra alone already. Without having delved into this too
much, it seems to me like the core can't do a whole lot without
(potentially) stepping on drivers' toes. The current
save_context/restore_context callbacks seem to be mostly that, though,
so I think they're a good starting point. You're right that we may
eventually see clearer patterns emerge.

> > > A generic API to get any index for this question is overkill and we
> > > should consider adding some sort of API like
> > > clk_hw_get_current_parent_index(), or a framework flag that tells
> > > the framework this parent is incorrect and we need to call the
> > > .set_parent() op again to reconfigure it.
> >
> > Okay, I think I see what you're saying. The current implementation
> > does carry a bit of a risk because users could be calling this
> > function with any arbitrary pair of struct clk_hw *, even completely
> > unrelated ones.
> >
> > How about we turn it into this instead:
> >
> > /**
> >  * clk_hw_get_parent_index - return the index of the parent clock
> >  * @hw: clk_hw associated with the clk being consumed
> >  *
> >  * Fetches and returns the index of parent clock. Returns -EINVAL if
> >  * the given clock does not have a current parent.
> >  */
> > int clk_hw_get_parent_index(struct clk_hw *hw)
> > {
> >         struct clk_hw *parent = clk_hw_get_parent(hw);
> >
> >         if (!parent)
> >                 return -EINVAL;
> >
> >         return clk_fetch_parent_index(hw->core, parent->core);
> > }
> > EXPORT_SYMBOL_GPL(clk_hw_get_parent_index);
> >
> > I think that has the advantage that we can't pass it a parent that's
> > not really a parent. There's still the slightly weird case where the
> > clock doesn't have a current parent, but hopefully that's something
> > we are not going to encounter much. After all this only makes sense
> > to be called on mux clocks and they always do have a parent by
> > definition.
>
> Right.
>
> > Perhaps we should be more explicit and wrap that !parent conditional
> > in a WARN_ON()? In my local patches I do that at the call sites
> > because they are all functions returning void, so we'd be silently
> > ignoring those cases, but I think it may make sense to have it in the
> > core.
>
> Sure, a WARN_ON() sounds fair. That will not take the whole task down
> and makes sure that drivers aren't doing something incorrect.
> Otherwise, this looks good and we can optimize by caching the parent
> index later if we really need to.

Okay, great. I'll go replace the above patch in the branch that I have.
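For reference, the call sites in my Tegra patches then end up looking
more or less like the sketch below. Note that this is only meant to show
the shape of the restore path: the tegra_clk_sketch_mux type, its
reg/shift/mask fields and the restore callback are all made up for
illustration and don't correspond to the actual driver code; only
clk_hw_get_parent_index() is real, i.e. the helper proposed above.

#include <linux/clk-provider.h>
#include <linux/io.h>
#include <linux/kernel.h>

/* made-up mux type standing in for the real Tegra clock types */
struct tegra_clk_sketch_mux {
        struct clk_hw hw;
        void __iomem *reg;
        u32 shift;
        u32 mask;
};

#define to_sketch_mux(_hw) \
        container_of(_hw, struct tegra_clk_sketch_mux, hw)

static void tegra_clk_sketch_mux_restore_context(struct clk_hw *hw)
{
        struct tegra_clk_sketch_mux *mux = to_sketch_mux(hw);
        int index = clk_hw_get_parent_index(hw);
        u32 value;

        /* with the WARN_ON() in the core this can simply bail out */
        if (index < 0)
                return;

        /* rewrite the wiped mux field with the framework's cached index */
        value = readl(mux->reg);
        value &= ~(mux->mask << mux->shift);
        value |= (index & mux->mask) << mux->shift;
        writel(value, mux->reg);
}

The nice part is that nothing needs to be saved on suspend; the parent
cached in the framework is the single source of truth on resume.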
I'm not sure if you saw it, but I had sent this in a pull request for
v5.5-rc1 about a week ago because I've got Tegra clock driver patches
that depend on this. I can replace this patch with the above proposal,
update the Tegra clock driver branch and then resend the two pull
requests.

Does that sound like a plan?

Thierry