Re: [RFC 2/2] dt-bindings: firmware: tegra186-bpmp: Document interconnects property

On 27.01.2020 15:49, Thierry Reding wrote:
> On Mon, Jan 27, 2020 at 12:56:24AM +0300, Dmitry Osipenko wrote:
> [...]
>> Thinking a bit more about how to define the ICC, I'm now leaning
>> towards a variant like this:
>>
>> interconnects =
>>     <&mc TEGRA186_MEMORY_CLIENT_BPMP &emc TEGRA_ICC_EMEM>,
>>     <&mc TEGRA186_MEMORY_CLIENT_BPMPR>,
>>     <&mc TEGRA186_MEMORY_CLIENT_BPMPW>,
>>     <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
>>     <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW>;
>>
>> interconnect-names = "dma-mem", "read", "write", "dma-read", "dma-write";
>>
>> It looks like there is a problem with having a full MC-EMEM path
>> defined for each memory client: it's not very practical in terms of
>> memory frequency scaling.
>>
>> Take the Display Controller for example: it has a memory client for
>> each display (overlay) plane. If the planes do not overlap on the
>> displayed area, then the required total memory bandwidth equals the
>> peak bandwidth among the visible planes. But if planes overlap, the
>> bandwidths of the overlapping planes accumulate, because overlapping
>> planes issue read requests simultaneously for the overlapping areas.
>>
>> The Memory Controller doesn't have any knowledge of the Display
>> Controller's specifics. Thus, in the end, it should be the
>> responsibility of the Display Controller's driver to calculate the
>> required bandwidth for the hardware unit, since only the driver has
>> all the required knowledge about the planes' overlap state and so on.
> 
> I agree that the device-specific knowledge should live in the device-
> specific drivers. However, what you're doing above is basically putting
> the OS-specific knowledge into the device tree.
> 
> The memory client interfaces are a real thing in hardware that can be
> described using the corresponding IDs. But there is no such thing as the
> "BPMP" memory client. Rather it's composed of the other four.
> 
> So I think a better approach would be for the consumer driver to deal
> with all of that. If looking only at bandwidth, the consumer driver can
> simply pick any one of the clients/paths for any of the bandwidth
> requests, and for per-interface settings like latency allowance the
> consumer would choose the appropriate path.

It would be good if we could avoid doing things like that, because it
doesn't sound very nice :) Although it should work.
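
Just to check that I understand the suggestion correctly, on the
consumer side it would look roughly like this (a minimal sketch against
the generic ICC consumer API; the function name, the chosen path and
the bandwidth values are placeholders):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

static int bpmp_icc_init(struct device *dev)
{
	struct icc_path *path;
	int err;

	/*
	 * Any one of the four real clients works for the bandwidth
	 * request; "read" is an arbitrary choice here.
	 */
	path = of_icc_get(dev, "read");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/*
	 * Request the aggregate bandwidth of the whole unit on this
	 * single path (average / peak, in kBps).
	 */
	err = icc_set_bw(path, 102400, 204800);
	if (err) {
		icc_put(path);
		return err;
	}

	return 0;
}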

On older Tegra SoCs the Memory Controller has a hardware ID for each
client module, and we're already using these IDs for the MC resets.
Don't you think we could use these IDs for the ICC?

Are you sure that newer SoCs don't have these IDs too, or are they
perhaps kept private now?
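
Coming back to the Display Controller example from my previous mail,
the kind of driver-side aggregation I have in mind is roughly the
following (a made-up sketch; the plane structure and the overlap flag
are hypothetical, not real kernel code):

#include <linux/kernel.h>
#include <linux/types.h>

/* Hypothetical per-plane state, not a real kernel structure. */
struct dc_plane {
	bool visible;
	bool overlaps;	/* overlaps at least one other visible plane */
	u32 peak_bw;	/* kBps */
};

/*
 * Non-overlapping planes are fetched one at a time, so the required
 * bandwidth is the maximum among them; overlapping planes issue read
 * requests simultaneously for the shared area, so their bandwidths
 * add up.
 */
static u32 dc_aggregate_bw(const struct dc_plane *planes, unsigned int num)
{
	u32 max_bw = 0, sum_bw = 0;
	unsigned int i;

	for (i = 0; i < num; i++) {
		if (!planes[i].visible)
			continue;

		if (planes[i].overlaps)
			sum_bw += planes[i].peak_bw;
		else
			max_bw = max(max_bw, planes[i].peak_bw);
	}

	return max(max_bw, sum_bw);
}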

>> The same applies to multimedia units like the GPU or the Video
>> Decoder. They have multiple memory clients, and (I'm pretty sure)
>> nobody is going to calculate the memory bandwidth requirements for
>> every client; it's simply impractical.
>>
>> So, I'm suggesting that we should have a single "dma-mem" ICC path for
>> every hardware unit.
>>
>> The rest of the ICC paths could be memory_client -> memory_controller
>> paths, providing knobs for things like the MC arbitration (latency)
>> configuration of each memory client. I think this variant of the
>> description is actually closer to the hardware, since the client's
>> arbitration configuration resides in the Memory Controller.
> 
> Not necessarily. The target of the access doesn't always have to be the
> EMC. It could equally well be IRAM, in which case there are additional
> controls that need to be programmed within the MC to allow the memory
> client to access IRAM. If you don't have a phandle to IRAM in the
> interconnect properties, there's no way to make this distinction.

Could you please clarify what you mean by "memory client" here? Do you
mean the whole hardware module/unit, or each memory client of the
hardware module that needs to be programmed for IRAM access?


