Re: Tegra DRM device tree bindings

Hi all,

I'm not sure what your exact plans are for the direction in which the
DRM driver should head, as I'm still a bit out of the loop: many of
those matters were discussed only internally at NVIDIA or with
developers under NDA. But I'll still try to join the discussion.

On Wednesday, 27.06.2012, 07:14 +0200, Thierry Reding wrote:
> On Tue, Jun 26, 2012 at 08:48:18PM -0600, Stephen Warren wrote:
> > On 06/26/2012 08:32 PM, Mark Zhang wrote:
> > >> On 06/26/2012 07:46 PM, Mark Zhang wrote:
> > >>>>> On Tue, 26 Jun 2012 12:55:13 +0200
> > >>>>> Thierry Reding <thierry.reding@xxxxxxxxxxxxxxxxx> wrote:
> > >> ...
> > >>>> I'm not sure I understand how information about the carveout would be
> > >>>> obtained from the IOMMU API, though.
> > >>>
> > >>> I think that can be similar to the current GART implementation. Define the carveout as:
> > >>>
> > >>> carveout {
> > >>>         compatible = "nvidia,tegra20-carveout";
> > >>>         size = <0x10000000>;
> > >>> };
> > >>>
> > >>> Then create a file such as "tegra-carveout.c" to pick up these definitions
> > >>> and register itself as the platform device's IOMMU instance.
> > >>
> > >> The carveout isn't a HW object, so it doesn't seem appropriate to define a DT
> > >> node to represent it.
> > > 
> > > Yes. But I think it's better to export the size of the carveout as a
> > > configurable item, so we need to define it somewhere. How about defining
> > > the carveout as a property of the GART?
> > 
> > There already exists a way of preventing Linux from using certain chunks
> > of memory; the /memreserve/ syntax. From a brief look at the dtc source,
> > it looks like /memreserve/ entries can have labels, which implies that a
> > property in the GART node could refer to the /memreserve/ entry by
> > phandle in order to know what memory regions to use.
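
(For illustration, the idea Stephen describes might look roughly like
this in DTS. The property name is made up, and whether dtc really
resolves a reference to a /memreserve/ label into something a property
can use is exactly the part that would need checking.)

/* labeled reservation, kept out of the normal memory pool */
carveout: /memreserve/ 0x0e000000 0x02000000;

gart {
        compatible = "nvidia,tegra20-gart";
        /* hypothetical property referring to the reservation above */
        nvidia,carveout = <&carveout>;
};
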
> 
> Wasn't the whole point of using a carveout supposed to be a replacement
> for the GART? As such I'd think the carveout should rather be a property
> of the host1x device.
> 
In my understanding the carveout is neither a hardware nor a software
component; it's just a somewhat special pool of memory. As I pointed
out in one of the earlier mails, a carveout cannot completely replace
the GART. While normal allocations for graphics use should be
contiguous, the GART allows us to link normal scattered sysram buffers
into the GPU address space, which is a nice thing to have.
IMHO, if the carveout is to be used exclusively by the GPU (i.e. the
DRM driver), it should be a property of the host1x device.

> AIUI what we want to do is have a large contiguous region of memory that
> a central component (host1x) manages as a pool from which clients (DRM,
> V4L, ...) can allocate buffers as needed. Since all of this memory will
> be contiguous anyway there isn't much use for the GART anymore.
> 
I think this is the wrong way to go. Having a special memory pool
managed by some driver adds yet another allocator to the kernel, which
is clearly not desirable. If we want a special memory region for GPU
use, we should not share that pool with other components.

But if we want a memory region for contiguous allocations that is used
by many components, which seems to be the consensus here, CMA is the
way to go. In that case I don't think we want to bother with a carveout
property at the DRM driver level at all. Such a shared memory region
managed by CMA should be defined at a higher level of the device tree.
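
As a rough sketch of what "a higher level of the device tree" could
mean (there is no agreed binding for this yet, so the node and property
names below are hypothetical):

/ {
        reserved-memory {
                #address-cells = <1>;
                #size-cells = <1>;
                ranges;

                /* shared pool for contiguous allocations via CMA */
                cma_pool: pool@e000000 {
                        reg = <0x0e000000 0x10000000>;
                        reusable;
                };
        };
};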

Thanks,
Lucas


_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel

