I'm working on an implementation of the ARM clock API for the Freescale i.MX1, and I find myself wondering where the default clock settings should come from. (The code is very crufty right now and only works on Zodiacs, so there's not much point in asking for it, unless you want to critique it, in which case I could use the feedback.)

Looking at the clock API implementation for the TI OMAP (arch/arm/mach-omap/clock.c), I see a table of available clock frequencies. The init code finds the fastest one and sets it, then clk_use()s three other clocks. Doesn't that constitute a policy decision? If so, where should this policy reside? (Rough sketches of this pattern and of my own approach follow at the end of this mail.)

I ask because my first crack at a clock driver implementation had every clock disabled/spun down until a client showed up. That seems fine until you discover that the serial console gets initialized *way* before the clock and serial drivers, which means the UART clocks are effectively already in use; those depend on PERCLK1, which in turn depends on SYSPLL. OTOH, if you don't have the serial console configured, then none of that is a problem.

All of which is a long-winded way of observing that the initial state of the clocks appears, to this novice's eye, to be policy-dependent. In the case of the serial console, the policy is encoded in the kernel config and can be handled easily, if hackishly. Handling other policies is less obvious; sticking them all in the kernel config seems wrong. I would prefer to do The Right Thing here.

Can anyone offer insights on this issue?

Schwab
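
Sketch 1: the OMAP pattern I'm describing, paraphrased from memory rather than lifted from arch/arm/mach-omap/clock.c; the table layout and clock names here are made up, and I'm assuming the clk_use()-era API from <asm/hardware/clock.h>:

#include <linux/init.h>
#include <asm/hardware/clock.h>

struct rate_entry {
	unsigned long rate;	/* CPU rate in Hz */
	/* ... PLL/divider settings needed to reach it ... */
};

static struct rate_entry rate_table[] = {
	{  60000000 },
	{ 120000000 },
	{ 168000000 },
	{ 0 },			/* end marker */
};

static int __init example_clocks_init(void)
{
	struct rate_entry *e, *best = rate_table;
	struct clk *cpu_ck, *timer_ck, *uart_ck, *gpio_ck;

	/* pick the fastest rate in the table and program it */
	for (e = rate_table; e->rate; e++)
		if (e->rate > best->rate)
			best = e;
	cpu_ck = clk_get(NULL, "cpu_ck");	/* error handling omitted */
	clk_set_rate(cpu_ck, best->rate);

	/* ...and then mark a handful of clocks as in use up front */
	timer_ck = clk_get(NULL, "timer_ck");
	uart_ck  = clk_get(NULL, "uart_ck");
	gpio_ck  = clk_get(NULL, "gpio_ck");
	clk_use(timer_ck);
	clk_use(uart_ck);
	clk_use(gpio_ck);

	return 0;
}
arch_initcall(example_clocks_init);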
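
Sketch 2: the shape of my first crack at it, much simplified (locking and error handling omitted), with everything lazily enabled through the parent chain. In the old API the platform defines struct clk itself, so this layout is specific to my tree:

struct clk {
	struct clk	*parent;
	int		usecount;
	int		(*enable)(struct clk *);
	void		(*disable)(struct clk *);
};

int clk_use(struct clk *clk)
{
	/* first user powers the whole chain up, parents first */
	if (clk->usecount++ == 0) {
		if (clk->parent)
			clk_use(clk->parent);
		if (clk->enable)
			clk->enable(clk);
	}
	return 0;
}

void clk_unuse(struct clk *clk)
{
	/* last user spins the chain back down, children first */
	if (--clk->usecount == 0) {
		if (clk->disable)
			clk->disable(clk);
		if (clk->parent)
			clk_unuse(clk->parent);
	}
}

/*
 * The chain that bites me: the UART clocks hang off PERCLK1,
 * which hangs off SYSPLL.  Everything starts with usecount 0,
 * i.e. disabled, until a driver calls clk_use().
 */
static struct clk clk_syspll  = { /* .enable/.disable hooks here */ };
static struct clk clk_perclk1 = { .parent = &clk_syspll };
static struct clk clk_uart1   = { .parent = &clk_perclk1 };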
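
Sketch 3: the hackish serial-console fix I alluded to, building on sketch 2 and keying off the console config symbol (CONFIG_SERIAL_IMX_CONSOLE, if I have the name right):

#include <linux/init.h>

static int __init imx_clocks_init(void)
{
	/* everything starts disabled by default... */
#ifdef CONFIG_SERIAL_IMX_CONSOLE
	/*
	 * ...except the console UART: pre-use its clock at init
	 * time so the lazy scheme above never yanks the clock out
	 * from under the console.  This implicitly uses PERCLK1
	 * and SYSPLL via the parent chain.
	 */
	clk_use(&clk_uart1);
#endif
	return 0;
}
arch_initcall(imx_clocks_init);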