Hi Chris,

On Tue, Mar 27, 2012 at 12:20 PM, Chris Ball <cjb@xxxxxxxxxx> wrote:
> Hi,
>
> On Mon, Mar 26 2012, Magnus Damm wrote:
>>> Do you have a feeling of whether it is worth trying to start with a
>>> value close to 400kHz or if it would be better to simplify the code?
>>> I can try and measure the difference in start up time for particular
>>> hardware combinations, but I'm not sure how far that will get us.
>>
>> I believe the correct way is to program the hardware to be as close to
>> 400 kHz as possible. I may be wrong, but I guess that slower than 400
>> kHz is also acceptable during the initial phase, but faster may mean
>> out of spec. For optimal performance the code may need to be reworked
>> to support both correct and slow 400 kHz as well as whatever high
>> frequencies are needed for fast transfers.
>
> Hm, I think I'm missing something -- you shouldn't need to optimize f_min
> in the driver at all, because the core handles retrying at lower frequencies
> in the init phase (before switching to the higher frequency that comes from
> the CSD) and it always begins at 400KHz if that's above f_min. In core.c:
>
> void mmc_rescan(struct work_struct *work)
> {
>         static const unsigned freqs[] = { 400000, 300000, 200000, 100000 };
> ...
>         for (i = 0; i < ARRAY_SIZE(freqs); i++) {
>                 if (!mmc_rescan_try_freq(host, max(freqs[i], host->f_min)))
> ...
>
> So, why would you want f_min to be near 400KHz?

First of all, sorry to cause confusion. It is most likely me missing
something. =)

The reason we want f_min to be near 400 kHz is that this moves our
window of allowed frequencies as high as possible. We could go lower
than 400 kHz, but due to hardware limitations that may also lower
f_max.

Thanks to your pointer I can now see that a range of frequencies is
tried in mmc_rescan(). I can see how f_init is set in
mmc_rescan_try_freq() and passed down to mmc_power_up(), which in turn
sets ios.clock to f_init and calls mmc_set_ios(). This happens
regardless of f_min in this case - perhaps worth comparing with the
WARN_ON() check against f_min in __mmc_set_clock(). I know too little
about the MMC subsystem to say anything about that.

Anyway, I was mainly wondering about the setup of the clocks in the
driver for the MMC host hardware. I got the impression that f_min and
f_max are supposed to describe the minimum and maximum frequencies
that the hardware supports. On our SoCs these depend on the rate of
the parent clock and the number of bits in the MMC host hardware
divider. So the parent clock rate may be rather high, and if it
happens to be set too high at the board level then the MMC hardware
block cannot program frequencies as low as 400 kHz.

So my point is only that the parent clock rate must be chosen with
care: it needs to be high enough to give a reasonably fast clock for
decent general-purpose throughput, but if we increase it too much we
may force the lowest achievable frequency above 400 kHz, and then I
suspect we are out of spec.

All this assumes that the parent clock is set statically to some
frequency during boot by the SoC or board code. Some device drivers
may set the parent clock rate in ->set_ios(), but we do not at this
point. We keep the parent rate fixed and just adjust the frequency
within the range that the on-MMC-controller divider allows, without
affecting the rest of the system.
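To make the trade-off concrete, here is a rough sketch (plain C,
compilable in userspace; this is not our driver code). The 8-bit
divider width and the example parent rates are made-up numbers for
illustration, and real hardware may restrict the divisor further,
e.g. power-of-two steps or a minimum divisor greater than one:

    #include <stdio.h>

    /*
     * Assumed model: the controller divides a fixed parent clock by
     * any integer from 1 to 2^DIV_BITS. So f_max is the parent rate
     * itself and f_min is the parent rate over the largest divisor.
     */
    #define DIV_BITS    8                       /* assumed divider width */
    #define DIV_MAX     (1UL << DIV_BITS)       /* largest divisor: 256 */

    static void show_window(unsigned long parent_rate)
    {
            unsigned long f_max = parent_rate;              /* divisor of 1 */
            unsigned long f_min = parent_rate / DIV_MAX;    /* divisor of 256 */

            printf("parent %9lu Hz -> f_min %7lu Hz, f_max %9lu Hz%s\n",
                   parent_rate, f_min, f_max,
                   f_min > 400000 ? "  <-- cannot reach 400 kHz" : "");
    }

    int main(void)
    {
            show_window(50000000);  /* 50 MHz parent: f_min ~195 kHz, fine */
            show_window(200000000); /* 200 MHz parent: f_min ~781 kHz, too high */
            return 0;
    }

With the 50 MHz parent every frequency in mmc_rescan()'s
400/300/200/100 kHz list can be honoured, but with the 200 MHz parent
max(freqs[i], host->f_min) turns every attempt into ~781 kHz, which is
the out-of-spec case I am worried about.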
The code could be extended to also set the parent clock rate, but due
to clock dependencies in the SoC clock topology we may not be able to
change those frequencies at runtime.

Anyway, I hope this clarifies how I see things!

Thanks,

/ magnus