Re: [PATCH v4 3/7] dma-buf: heaps: restricted_heap: Add private heap ops

On Fri, Jan 12, 2024 at 05:23:07PM -0800, John Stultz wrote:
> On Fri, Jan 12, 2024 at 4:13 PM Jeffrey Kardatzke <jkardatzke@xxxxxxxxxx> wrote:
> > On Fri, Jan 12, 2024 at 3:51 PM John Stultz <jstultz@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Jan 12, 2024 at 3:27 PM Jeffrey Kardatzke <jkardatzke@xxxxxxxxxx> wrote:
> > > > On Fri, Jan 12, 2024 at 2:52 PM John Stultz <jstultz@xxxxxxxxxx> wrote:
> > > > > I know part of this effort here is to start to centralize all these
> > > > > vendor specific restricted buffer implementations, which I think is
> > > > > great, but I just worry that in doing it in the dmabuf heap interface,
> > > > > it is a bit too user-facing. The likelihood of encoding a particular
> > > > > semantic to the singular "restricted_heap" name seems high.
> > > >
> > > > In patch #5 it has the actual driver implementation for the MTK heap
> > > > that includes the heap name of "restricted_mtk_cm", so there shouldn't
> > > > be a heap named "restricted_heap" that's actually getting exposed. The
> > >
> > > Ah, I appreciate that clarification! Indeed, you're right the name is
> > > passed through. Apologies for missing that detail.
> > >
> > > > restricted_heap code is a framework, and not a driver itself.  Unless
> > > > I'm missing something in this patchset...but that's the way it's
> > > > *supposed* to be.
> > >
> > > So I guess I'm not sure I understand the benefit of the extra
> > > indirection. What then does the restricted_heap.c logic itself
> > > provide?
> > > The dmabuf heaps framework already provides a way to add heap implementations.
> >
> > So in the v1 patchset, it was done with just a Mediatek specific heap
> > with no framework or abstractions for another vendor to build on top
> > of. The feedback was to make this more generic since Mediatek won't be
> > the only vendor who wants a restricted heap... which is how it ended up
> > here. There was more code in the framework before relating to TEE
> > calls, but then that was moved to the vendor specific code since not
> > all restricted heaps are allocated through a TEE.
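For reference, plugging a heap into the existing dma-heap framework is
only a few lines. A rough sketch of what a vendor driver like the one
in patch #5 presumably boils down to (the ops implementation here is
invented for illustration):

#include <linux/dma-heap.h>
#include <linux/err.h>

/* Sketch only: the allocate hook is a stub; a real driver would do
 * its vendor-specific (e.g. TEE-backed) allocation here. */
static struct dma_buf *mtk_restricted_allocate(struct dma_heap *heap,
					       unsigned long len,
					       unsigned long fd_flags,
					       unsigned long heap_flags)
{
	return ERR_PTR(-EOPNOTSUPP);
}

static const struct dma_heap_ops mtk_restricted_heap_ops = {
	.allocate = mtk_restricted_allocate,
};

static int __init mtk_restricted_heap_init(void)
{
	struct dma_heap_export_info exp_info = {
		.name = "restricted_mtk_cm",	/* heap name from patch #5 */
		.ops  = &mtk_restricted_heap_ops,
	};
	struct dma_heap *heap = dma_heap_add(&exp_info);

	return PTR_ERR_OR_ZERO(heap);
}

So the question being discussed is what restricted_heap.c adds on top
of this.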
> 
> Yeah. I apologize, as I know how frustrating the contradictory
> feedback can be. I don't mean to demotivate. :(
> 
> I think folks would very much like to see consolidation around the
> various implementations, and I agree!
> I just worry that creating the common framework for this concept in a
> dmabuf heaps driver is maybe too peripheral/close to userland.
> 
> > This was also desirable for the V4L2 pieces since there's going to be
> > a V4L2 flag set when using restricted dma_bufs (and it wants to
> > validate that)... so in order to keep that more generic, there should
> > be a higher level concept of restricted dma_bufs that isn't specific
> > to a single vendor.  One other thing that would ideally come out of
> > this is a cleaner way to check that a dma_buf is restricted or not.
> 
> Yeah. If there is a clear meaning to "restricted" here, I think having
> a query method on the dmabuf is reasonable.
> My only worry is that the meaning is too vague and userland starts
> depending on it meaning what it meant for vendor1, but not what it
> means for vendor2.
> 
> So we need some clarity in what "restricted" really means.  For
> instance, it being not cpu mappable vs other potential variations like
> being cpu mappable, but not cpu accessible.  Or not cpu mappable, but
> only mappable between a set of 3 devices (Which 3 devices?! How can we
> tell?).
> 
Can we flip things around? I.e., instead of saying which devices are
allowed to use the restricted buffer, can we instead say where it's not
allowed to be used? My view has been that by default the contents of the
types of buffers we're talking about here are only accessible to things
running on the secure side, i.e., typically S-EL3, S-EL1 and a specific
Trusted Application running in S-EL0. I guess that serves as some kind
of baseline.

From there, things turn more dynamic: firewalls etc. can be configured
to give access to various IPs, blocks and runtimes.

I understand that it's nice to be able to know all this from the Linux
kernel point of view, but does it have to be aware of this? What's the
major drawback if Linux doesn't know about it?

> And if there is variation, maybe we need to enumerate the types of
> "restricted" buffers so we can be specific when it's queried.
> 
> That's where maybe having the framework for this be more central or
> closer to the kernel mm code and not just a sub-type of a dmabuf heap
> driver might be better?
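If we end up enumerating, I would picture something along these lines
(purely illustrative, the names are invented):

/* Illustrative only: possible flavours of "restricted", following
 * the variations listed above. */
enum restricted_buf_type {
	RESTRICTED_NOT_CPU_MAPPABLE,	/* no CPU mapping allowed at all */
	RESTRICTED_CPU_MAP_NO_ACCESS,	/* mappable, but accesses fault */
	RESTRICTED_DEVICE_ALLOWLIST,	/* only a fixed set of devices */
};

That would at least let userland ask which semantic it is getting
instead of guessing from the heap name.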
> 
> > The current V4L2 patchset just attaches the dma_buf and then checks if
> > the page table is empty... and if so, it's restricted. But now I see
> > there's other feedback indicating attaching a restricted dma_buf
> > shouldn't even be allowed, so we'll need another strategy for
> > detecting them. Ideally there is some function/macro like
> > is_dma_buf_restricted(struct dma_buf*) that can indicate that...but we
> > haven't come up with a good way to do that yet which doesn't involve
> > adding another field to dma_buf or to dma_buf_ops (and if such a thing
> > would be fine, then OK...but I had assumed we would get pushback on
> > modifying either of those structs).
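A minimal sketch of such a query, assuming struct dma_buf grew a flag
for it (no such field exists today; this is exactly the core-structure
change in question):

#include <linux/dma-buf.h>

/* Hypothetical: relies on an is_restricted field being added to
 * struct dma_buf; it is not part of the current structure. */
static inline bool is_dma_buf_restricted(struct dma_buf *dmabuf)
{
	return dmabuf && dmabuf->is_restricted;
}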
> 
> If there's a need and the best place to put something is in the
> dma_buf or dma_buf_ops, that's where it should go.  Folks may
> reasonably disagree if it's the best place (there may be yet better
> spots for the state to sit in the abstractions), but for stuff going
> upstream, there's no reason to try to hack around things or smuggle
> state just to avoid changing core structures. Especially if core
> structures are internal only and have no ABI implications.
> 
> Sima's suggestion that attachments should fail if the device cannot
> properly map the restricted buffer makes sense to me. Though I don't
> quite see why all attachments should fail, and I don't really like the
> idea of a private API, but I need to look more at the suggested virtio
> example (but even they said that wasn't their preferred route).
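To make that suggestion concrete, I imagine the heap's dma_buf_ops
attach callback doing something like this (sketch; the firewall query
helper is made up):

/* Sketch: restricted_heap_device_allowed() stands in for whatever
 * firewall/TEE lookup a vendor implementation actually has. */
static int restricted_heap_attach(struct dma_buf *dmabuf,
				  struct dma_buf_attachment *attach)
{
	if (!restricted_heap_device_allowed(dmabuf->priv, attach->dev))
		return -EPERM;	/* this device may not map the buffer */

	return 0;
}

That way only the devices the secure side has admitted can even
attach, and everything else fails early.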
> 
> My sense of attach was only that it was supposed to connect a device's
> interest in the buffer, allowing lazy allocation to satisfy various
> device constraints before first mapping - a design feature that I
> don't think anyone ever implemented.  So my sense was it didn't have
> much meaning otherwise (but was a requirement to call before map). But
> that may have evolved since the early days.
> 
> And I'm sure the method to figure out if the attachment can work with
> the device may be complicated/difficult, so it sounding reasonable can
> be far from it being reasonable to implement.
> 
> And again, I don't mean to frustrate or demotivate here. I'm really
> excited to see this effort being pushed upstream!
> 
> thanks
> -john

-- 
// Regards
Joakim


