Re: [PATCH v2] dt-bindings: media: Add sram-size Property for Wave5

On 2/5/24 8:12 AM, Nishanth Menon wrote:
On 17:08-20240202, Krzysztof Kozlowski wrote:
On 02/02/2024 16:58, Nishanth Menon wrote:
On 14:56-20240202, Krzysztof Kozlowski wrote:
On 02/02/2024 13:52, Nishanth Menon wrote:
On 11:47-20240202, Krzysztof Kozlowski wrote:
On 01/02/2024 19:42, Brandon Brnich wrote:
Wave521c has the capability to use an SRAM carveout to store reference data, with the
purpose of reducing memory bandwidth. To properly use this pool, the driver
expects both an sram phandle and an sram-size property. Without the sram-size
property, the driver defaults the value to zero, making the sram phandle irrelevant.
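
For context, the binding proposed here would be used roughly as below. This is
only a sketch: the node labels, addresses, and the 32 KiB size are invented for
illustration; only the sram and sram-size properties come from this patch.

```dts
/ {
	sram: sram@70000000 {
		compatible = "mmio-sram";
		reg = <0x70000000 0x10000>;
	};

	video-codec@30210000 {
		compatible = "ti,j721s2-wave521c", "cnm,wave521c";
		reg = <0x30210000 0x10000>;
		sram = <&sram>;
		/* carve 32 KiB out of the shared SRAM for reference data */
		sram-size = <0x8000>;
	};
};
```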

I am sorry, but what the driver expects should not be the rationale for a new
property. This justification clearly suggests it is not a property for DT.


Yup, the argumentation in the commit message is from the wrong
perspective. Bindings are an OS-agnostic hardware description, and what the
driver does with that description is the driver's problem.

Let me at least paraphrase my understanding:
In this case the hardware block will limp along using DDR
(as in the current description), due to the latencies involved
in DDR accesses. However, the hardware block has the capability
to use a substantially lower-latency SRAM to provide proper
performance and hence, for example, to handle higher-resolution
data streams. This SRAM is instantiated at SoC level rather than
embedded within the hardware block itself.

That sounds like OS policy. Why would different boards with the same
component have this set differently? Based on the amount of available
memory? That, I believe, is runtime configuration, because it might
depend on the user-space you run. Based on purpose (e.g. optimized for
decoding or for general usage)? Again, run-time, because the same hardware
board can be used for different purposes.


Why is this OS policy? It is a hardware capability.

How is the amount of SRAM a hardware capability? Each piece of hardware can
probably work with 1, 2 or 100 pages.

Traditionally,
many similar hardware blocks would have allocated local SRAM for the
worst case inside the hardware block itself and would not need to use
DDR in the first place. This hardware block, however, has the
capability to use some part of one of the many SRAM blocks in the SoC,
not shared with some other part of the system - so from a hardware
description perspective, we need to call out which
SRAM is available to the hardware block.

Just because more than one device wants some memory does not mean this
is a hardware property. Drivers can ask how much memory is available.
The OS knows how many users of the memory there are, so it knows how much to
allocate to each device.


Why would different boards need this set differently? Simply because
different cameras have different resolutions and framerates - and you
don't want to pay the worst-case SRAM penalty for every product
configuration.

Choice of resolution and framerate is a runtime or use-case-dependent
choice, not board-level configuration; therefore, so is the amount of SRAM
to use.

I am arguing this is similar to what we have for remoteprocs. Yes,
the use cases lead to the conclusion that SRAM size X is needed. Sure.
Let's even say that sram = <&sram> has X+100 bytes available, so as
far as the allocator is concerned, it can allocate. But how does the driver
know that 1k of that SRAM is already used by a remote core or some other
function?

The driver can ask the SRAM provider for all that info (gen_pool_avail()).
The driver should determine at runtime the amount of SRAM it needs and
only then attempt to allocate that amount (and free it when it is
done, too).
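
In driver terms, that runtime approach could be sketched roughly as below.
This is only an illustration: the wave5_* names are invented, error handling
is trimmed, and only the genalloc API calls are real kernel interfaces.

```c
/* Sketch: size the SRAM use at runtime instead of via a DT property.
 * Assumes the device node carries an "sram" phandle to an mmio-sram node.
 */
#include <linux/device.h>
#include <linux/genalloc.h>
#include <linux/of.h>

static unsigned long wave5_grab_sram(struct device *dev, size_t want,
				     size_t *got)
{
	struct gen_pool *pool = of_gen_pool_get(dev->of_node, "sram", 0);
	size_t avail;

	if (!pool)
		return 0;		/* no SRAM described: fall back to DDR */

	/* what is left after remote cores and other users took their share */
	avail = gen_pool_avail(pool);
	*got = min(want, avail);

	/* gen_pool_free() this when the use case is done */
	return gen_pool_alloc(pool, *got);
}
```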

There is no need to hardcode into DT the amount we think our use case will
need. Sure, it may fail to get as much as it wants if we don't carve
some out up front, but that is true of any memory allocation. Let's not
turn DT into a static memory allocation tool.


This is no different from a remoteproc use case, which makes me
wonder if "memory-region" is the right way to describe this usage. That
would be a very specific bucket made available to the driver.
And I'd say sram and memory-region would be two mutually exclusive options?
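
For illustration, that alternative would look roughly as below. This is a
sketch only: the labels, addresses, and sizes are invented, and
reserved-memory is conventionally used for carveouts from main memory, so
pointing it at SRAM is itself debatable.

```dts
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	codec_sram: codec-sram@70008000 {
		reg = <0x70008000 0x8000>;
		no-map;
	};
};

video-codec@30210000 {
	compatible = "cnm,wave521c";
	reg = <0x30210000 0x10000>;
	memory-region = <&codec_sram>;
};
```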


Our use of "memory-region" for remoteproc memory was also a questionable
thing to do, IMHO - something we will have to clean up at some point. Might be
good not to call too much attention to that :)

Andrew



Further, Linux is not the only thing that runs on these SoCs: these are
mixed systems with autonomously operating uC cores that may or may not
(typically not) even need to communicate with the MPU to state which part of
a resource they are hogging (hence the board-level definition).

OK, that could be the case, but I could also say the choice of RTOS or any
other software is likewise board-independent.
