On Tue, Jun 26, 2012 at 08:48:18PM -0600, Stephen Warren wrote:
> On 06/26/2012 08:32 PM, Mark Zhang wrote:
> >> On 06/26/2012 07:46 PM, Mark Zhang wrote:
> >>>>> On Tue, 26 Jun 2012 12:55:13 +0200
> >>>>> Thierry Reding <thierry.reding@xxxxxxxxxxxxxxxxx> wrote:
> >> ...
> >>>> I'm not sure I understand how information about the carveout would be
> >>>> obtained from the IOMMU API, though.
> >>>
> >>> I think that can be similar with current gart implementation. Define
> >>> carveout as:
> >>>
> >>> carveout {
> >>>         compatible = "nvidia,tegra20-carveout";
> >>>         size = <0x10000000>;
> >>> };
> >>>
> >>> Then create a file such like "tegra-carveout.c" to get these definitions
> >>> and register itself as platform device's iommu instance.
> >>
> >> The carveout isn't a HW object, so it doesn't seem appropriate to define
> >> a DT node to represent it.
> >
> > Yes. But I think it's better to export the size of carveout as a
> > configurable item. So we need to define this somewhere. How about define
> > carveout as a property of gart?
>
> There already exists a way of preventing Linux from using certain chunks
> of memory; the /memreserve/ syntax. From a brief look at the dtc source,
> it looks like /memreserve/ entries can have labels, which implies that a
> property in the GART node could refer to the /memreserve/ entry by
> phandle in order to know what memory regions to use.

Wasn't the whole point of using a carveout supposed to be a replacement
for the GART? As such I'd think the carveout should rather be a property
of the host1x device.

AIUI what we want to do is have a large contiguous region of memory that
a central component (host1x) manages as a pool from which clients (DRM,
V4L, ...) can allocate buffers as needed. Since all of this memory will
be contiguous anyway there isn't much use for the GART anymore. But
maybe I'm misunderstanding.

Thierry
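
For illustration, Stephen's /memreserve/ suggestion might look roughly
like the sketch below. The property name, node, and addresses are
hypothetical, and whether dtc actually resolves a reference to a
labelled /memreserve/ entry this way is precisely the point he is
unsure about:

        /dts-v1/;

        /* Reserve 256 MiB of RAM for the carveout; the label is what a
         * device node property would refer back to. */
        carveout_rsv: /memreserve/ 0x10000000 0x10000000;

        / {
                gart: gart@7000f024 {
                        compatible = "nvidia,tegra20-gart";
                        /* Hypothetical property naming the reservation. */
                        nvidia,carveout = <&carveout_rsv>;
                };
        };

This would keep the reservation in the standard reserve map rather than
inventing a DT node for something that is not hardware, which is the
objection raised above.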