Re: Tegra DRM device tree bindings

On 26.06.2012 16:41, Thierry Reding wrote:

> On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström wrote:
>> We also assign certain host1x common resources per device by convention,
>> e.g. sync points, channels and so on. We currently encode that information
>> in the device node (3D uses sync point number X, 2D uses numbers Y and Z).
>> The information is not actually describing hardware, as it just
>> describes the convention, so I'm not sure if device tree is the proper
>> place for it.
> Are they configurable? If so I think we should provide for them being
> specified in the device tree. They are still hardware resources being
> assigned to devices.


Yes, they're configurable, and there's nothing hardware-specific in the
assignment of a sync point to a particular use. It's all just a software
agreement. That's why I'm a bit hesitant about putting it in device
trees, which are supposed to describe only hardware.
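
To illustrate what I mean by a software agreement, something like the
following could simply live in the driver as a per-SoC table instead of
in the DT. This is a sketch only; the client names and sync point
numbers are made up, not our actual layout:

/*
 * Hypothetical sketch: sync point assignment kept as a convention
 * inside the driver rather than described in the device tree.
 */
struct nvhost_syncpt_assignment {
	const char *client;	/* host1x client, e.g. "gr2d", "gr3d" */
	unsigned int syncpt;	/* sync point reserved by convention */
};

static const struct nvhost_syncpt_assignment tegra20_syncpts[] = {
	{ "gr2d", 18 },		/* illustrative numbers only */
	{ "gr2d", 19 },
	{ "gr3d", 22 },
};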

>> Yes, we already have a bus_type for nvhost, and we have nvhost_device
>> and nvhost_driver that derive from device and device_driver
>> respectively. They all accommodate some host1x client device common
>> behavior and data that we need to store. We use the bus_type also to
>> match each device and driver together, but the matching is version
>> sensitive. For example, Tegra2 3D needs a different driver than
>> Tegra3 3D.
> 
> We'll have to figure out the best place to put this driver. The driver
> will need some code to instantiate its children from the DT and fill the
> nvhost_device structures with the data parsed from it.


True. We could say that the host1x driver is the parent, and it will
have to instantiate the nvhost device structs for its children. We just
have to ensure the correct ordering at boot-up.
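
Roughly what I have in mind, as a sketch only (nvhost_client_register()
is a hypothetical helper here, not existing code): host1x probes first
and then walks its DT children, so a client driver can only bind after
host1x itself is up.

#include <linux/of.h>
#include <linux/platform_device.h>

static int host1x_probe(struct platform_device *pdev)
{
	struct device_node *child;
	int err;

	/* ... map registers, set up sync point and channel state ... */

	/* Create a device for each client node found under host1x. */
	for_each_child_of_node(pdev->dev.of_node, child) {
		err = nvhost_client_register(&pdev->dev, child); /* hypothetical */
		if (err)
			dev_warn(&pdev->dev, "failed to add %s\n",
				 child->full_name);
	}

	return 0;
}

Alternatively, of_platform_populate() could create plain platform
devices for the children, if we decide not to keep a separate bus type.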

> BTW, what's the reason for calling it nvhost and not host1x?


When I started, there was only one driver and one device, and all client
devices were just hidden as internal implementation details. Thus the
driver wasn't really a "host1x" driver. Now we refer to the collection
of drivers for host1x and the client devices as nvhost.

>> Either way is fine for me. The full addresses are more familiar to me as
>> we tend to use them internally.
> Using the OF mechanism for translating the host1x bus addresses,
> relative to the host1x base address, to CPU addresses seems "purer", but
> either way should work fine.


I'll let you decide, as I don't have a strong opinion either way. I
guess whatever is the more common way wins.
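
Either way, the client drivers shouldn't have to care: if the host1x
node carries a "ranges" property, of_address_to_resource() walks it and
hands back a CPU address whichever form the "reg" properties use. A
sketch of what a client would do:

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/of_address.h>

static void __iomem *client_map_registers(struct device_node *np)
{
	struct resource regs;

	if (of_address_to_resource(np, 0, &regs))
		return NULL;

	/* regs.start is a CPU physical address here, translated through
	 * the parent's ranges if "reg" was host1x-relative. */
	return ioremap(regs.start, resource_size(&regs));
}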

>> We use carveout for Tegra2. Memory management is still a big question
>> mark for tegradrm that I'm trying to find a solution for.
> AIUI CMA is one particular implementation of the carveout concept, so I
> think we should use it, or extend it if it doesn't suit us.


Here I'd refer to Hiroshi's message: the host1x driver doesn't need to
know the details of which memory manager we use. We'll just hide that
detail behind one of the memory management APIs that nvhost uses.
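
Just to sketch the direction (these names are made up, not an existing
API): nvhost would only see something like the ops below, and the
backend behind them can be CMA, a carveout heap or whatever we end up
with.

#include <linux/types.h>

/* Hypothetical allocator interface internal to nvhost. */
struct nvhost_mem_ops {
	void *(*alloc)(size_t size, dma_addr_t *dma_addr);
	void (*free)(void *vaddr, size_t size, dma_addr_t dma_addr);
};

/* A CMA-backed implementation could wrap dma_alloc_coherent(), a
 * carveout-backed one its own allocator; host1x doesn't care. */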

Terje

