Re: [PATCH v6 33/52] memory: tegra20: Support interconnect framework

27.10.2020 17:11, Thierry Reding wrote:
...
>> +static int emc_icc_set(struct icc_node *src, struct icc_node *dst)
>> +{
>> +	struct tegra_emc *emc = to_tegra_emc_provider(dst->provider);
>> +	unsigned long long peak_bw = icc_units_to_bps(dst->peak_bw);
>> +	unsigned long long avg_bw = icc_units_to_bps(dst->avg_bw);
>> +	unsigned long long rate = max(avg_bw, peak_bw);
>> +	unsigned int dram_data_bus_width_bytes = 4;
> 
> Perhaps use something shorter for this variable (like dram_bus_width)? Also,
> since it's never modified, perhaps make it const? Or a #define?

It could actually be 2, depending on the board configuration, but I don't
know whether a 16-bit bus was ever used in the wild. AFAIK, the nv-tegra
kernels assume a 32-bit bus for all devices.
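
If support for a 16-bit board ever materializes, the width could be read
from the device tree instead of being hardcoded. A minimal, untested sketch,
assuming a hypothetical "nvidia,dram-bus-width" property (not an existing
binding) and <linux/of.h>:

/* Derive the DRAM data bus width from a (made-up) DT property,
 * falling back to the 32-bit bus that nv-tegra assumes everywhere.
 */
static unsigned int emc_dram_bus_width_bytes(struct tegra_emc *emc)
{
	u32 bus_width_bits = 32;

	of_property_read_u32(emc->dev->of_node, "nvidia,dram-bus-width",
			     &bus_width_bits);

	return bus_width_bits / 8;	/* 4 bytes for 32-bit, 2 for 16-bit */
}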

...
>> +err_msg:
>> +	dev_err(emc->dev, "failed to initialize ICC: %d\n", err);
> 
> It might be worth duplicating this error message to the failure
> locations so that the exact failure can be identified.

I think it would be better to extend the error messages on an as-needed
basis. It's very unlikely that we will ever see this error in practice.
Okay?
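
Just to illustrate what an extended per-site message could look like, here
is an untested sketch (the icc_provider_add() call is only an example call
site, not necessarily the one that would grow a message first):

	err = icc_provider_add(&emc->provider);
	if (err) {
		dev_err(emc->dev, "failed to add ICC provider: %d\n", err);
		return err;
	}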

...
>> +	 * of the client's FIFO buffers. Secondly, we need to take into
>> +	 * account impurities of the memory subsystem.
>> +	 */
>> +	if (tag == TEGRA_MC_ICC_TAG_ISO)
>> +		peak_bw = tegra_mc_scale_percents(peak_bw, 300);
> 
> 300% sounds a bit excessive. Do we really need that much?

It should be possible to drop it to 150% by tuning the priority timers and
hysteresis of the clients, but some of those configurations live within the
device's register range, and we would need a more complicated bandwidth
manager for that.

The 300% is an overestimation, but it's better to overestimate for a start
than to end up with unusable devices. This is what the nv-tegra kernel does
as well, btw.
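
To make the arithmetic concrete, here is a minimal sketch of a percents-based
scaler in the spirit of tegra_mc_scale_percents(); the real helper may differ
in details such as overflow handling:

/* Scale bandwidth by the given percentage, clamping to the u32 units
 * used by the ICC core.  With 300%, the ISO peak is simply tripled.
 * Needs <linux/math64.h> for div_u64().
 */
static inline u32 mc_scale_percents(u64 bw, unsigned int percents)
{
	bw = div_u64(bw * percents, 100);

	return min_t(u64, bw, U32_MAX);
}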

>> +
>> +	*agg_avg += avg_bw;
>> +	*agg_peak = max(*agg_peak, peak_bw);
> 
> I'm not very familiar with ICC, but shouldn't the aggregated peak value
> be the sum of the current aggregated peak and the new peak bandwidth?
> Currently you're selecting the maximum peak bandwidth across all
> clients, so isn't that going to be too small if for whatever reason
> multiple clients need peak bandwidth at the same time?

It's up to the platform drivers to decide how to interpret and use the
avg and peak values.

Please see emc_icc_set() above, which selects the max of the (avg, peak)
values. But maybe it would be better to move that out of the ICC set()
callback and into the ICC aggregate() callback:

*agg_peak = max(*agg_peak, *agg_avg);

I'll need to take a closer look.
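
For clarity, an untested sketch of what that aggregate() variant could look
like, using the current ICC aggregate() callback signature:

static int emc_icc_aggregate(struct icc_node *node, u32 tag,
			     u32 avg_bw, u32 peak_bw,
			     u32 *agg_avg, u32 *agg_peak)
{
	*agg_avg += avg_bw;

	/* never let the peak drop below the summed average bandwidth */
	*agg_peak = max3(*agg_peak, *agg_avg, peak_bw);

	return 0;
}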


