Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

On Fri, Jun 28, 2019 at 02:43:18PM -0400, Kenny Ho wrote:
> On Thu, Jun 27, 2019 at 5:24 PM Daniel Vetter <daniel@xxxxxxxx> wrote:
> > On Thu, Jun 27, 2019 at 02:42:43PM -0400, Kenny Ho wrote:
> > > Um... I am going to get a bit philosophical here and suggest that the
> > > idea of sharing (especially uncontrolled sharing) is inherently at odds
> > > with containment.  It's like, if everybody is special, no one is
> > > special.  Perhaps an alternative is to make this configurable so that
> > > people can allow sharing knowing the caveat?  And just to be clear,
> > > the current solution allows for sharing, even between cgroups.
> >
> > The thing is, why shouldn't we just allow it (with some documented
> > caveat)?
> >
> > I mean if all people do is share it as your current patches allow, then
> > there's nothing funny going on (at least if we go with just leaking the
> > allocations). If we allow additional sharing, then that's a plus.
> Um... perhaps I was being overly conservative :).  So let me
> illustrate with an example to add more clarity and get more comments
> on it.
> 
> Let's say we have the following cgroup hierarchy (The letters are
> cgroups with R being the root cgroup.  The numbers in brackets are
> processes.  The processes are placed with the 'No Internal Process
> Constraint' in mind.)
> R (4, 5) ------ A (6)
>   \
>     B ---- C (7,8)
>      \
>        D (9)
> 
> Here is a list of operations and the associated effect on the sizes
> tracked by the cgroups (for simplicity, each buffer is 1 unit in size).
> With the current implementation (charge on buffer creation, with
> restrictions on sharing):
> R   A   B   C   D   |Ops
> ================
> 1   0   0   0   0   |4 allocated a buffer
> 1   0   0   0   0   |4 shared a buffer with 5
> 1   0   0   0   0   |4 shared a buffer with 9
> 2   0   1   0   1   |9 allocated a buffer
> 3   0   2   1   1   |7 allocated a buffer
> 3   0   2   1   1   |7 shared a buffer with 8
> 3   0   2   1   1   |7 sharing with 9 (not allowed)
> 3   0   2   1   1   |7 sharing with 4 (not allowed)
> 3   0   2   1   1   |7 release a buffer
> 2   0   1   0   1   |8 release a buffer from 7

This is your current implementation, right? Let's call it A.
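
To make sure we mean the same thing, here is a minimal sketch of how I
read A's semantics. All the names below (struct drmcg, drmcg_*) are made
up for illustration, they are not the helpers from your patches:

/* Illustrative only -- hypothetical types and helpers. */
#include <stdbool.h>
#include <stddef.h>

struct drmcg {				/* stand-in for a drm cgroup */
	struct drmcg *parent;
	long bo_charged;		/* units charged to this cgroup */
	long bo_limit;			/* total GEM buffer allocation limit */
};

struct gem_bo {
	struct drmcg *owner;		/* cgroup charged at allocation time */
	size_t size;
};

/* Option A: charge the allocating cgroup and all its ancestors once, at
 * bo creation; nothing changes when the bo is later shared. */
static bool drmcg_try_charge(struct drmcg *cg, size_t size)
{
	struct drmcg *p;

	for (p = cg; p; p = p->parent)
		if (p->bo_charged + (long)size > p->bo_limit)
			return false;
	for (p = cg; p; p = p->parent)
		p->bo_charged += size;
	return true;
}

static void drmcg_uncharge(struct drmcg *cg, size_t size)
{
	struct drmcg *p;

	for (p = cg; p; p = p->parent)
		p->bo_charged -= size;
}

/* The sharing restriction from the table above: a process may only get
 * at a foreign bo if its cgroup is the owner's cgroup or a descendant
 * of it (4 -> 9 is fine, 7 -> 9 and 7 -> 4 are rejected). */
static bool drmcg_may_share(struct gem_bo *bo, struct drmcg *importer)
{
	struct drmcg *p;

	for (p = importer; p; p = p->parent)
		if (p == bo->owner)
			return true;
	return false;
}

With that reading, sharing never adds or moves a charge, and the uncharge
happens when the last reference to the bo goes away, matching your two
"release" rows.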

> The suggestion as I understand it (charge per buffer reference with
> unrestricted sharing.)
> R   A   B   C   D   |Ops
> ================
> 1   0   0   0   0   |4 allocated a buffer
> 2   0   0   0   0   |4 shared a buffer with 5
> 3   0   1   0   1   |4 shared a buffer with 9
> 4   0   2   0   2   |9 allocated a buffer
> 5   0   3   1   2   |7 allocated a buffer
> 6   0   4   2   2   |7 shared a buffer with 8
> 7   0   5   2   3   |7 sharing with 9
> 8   0   5   2   3   |7 sharing with 4
> 7   0   4   1   3   |7 release a buffer
> 6   0   3   0   3   |8 release a buffer from 7
> 
> Is this a correct understanding of the suggestion?

Yup, that's one option I think. The other option (and it's probably
simpler) is to go with your current accounting but drop the sharing
restriction. I.e. buffers are accounted to whoever allocates them first,
not to everyone who ends up using them. For memcg this has some serious
trouble with cgroups not getting cleaned up due to leaked references. But
for gem bo we spread the references around in a much more controlled
manner, and all the long-lived references are under the control of
userspace.
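
Roughly, continuing the sketch from above and still with made-up names
(drmcg_get()/drmcg_put() stand in for css_get()/css_put() style
refcounting), the simpler option could look like this: the bo keeps a
reference on the cgroup it charged, so the uncharge at final free always
has somewhere to go, even if the cgroup was removed in the meantime.

#include <stdlib.h>

/* Placeholders for pinning the charged cgroup. */
static struct drmcg *drmcg_get(struct drmcg *cg) { return cg; }
static void drmcg_put(struct drmcg *cg) { (void)cg; }

/* Charge whoever allocates first, put no restriction on sharing, and
 * uncharge only at final free. */
struct gem_bo *gem_bo_create(struct drmcg *cg, size_t size)
{
	struct gem_bo *bo;

	if (!drmcg_try_charge(cg, size))
		return NULL;			/* over the cgroup's limit */

	bo = calloc(1, sizeof(*bo));
	if (!bo) {
		drmcg_uncharge(cg, size);
		return NULL;
	}
	bo->owner = drmcg_get(cg);		/* pin the charged cgroup */
	bo->size = size;
	return bo;
}

/* Sharing the bo afterwards needs no drmcg_may_share() check and does
 * not add or move any charge; only the final free gives it back. */
void gem_bo_final_free(struct gem_bo *bo)
{
	drmcg_uncharge(bo->owner, bo->size);
	drmcg_put(bo->owner);
	free(bo);
}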

E.g. if Xorg fails to clean up the bo references of clients that have died,
that's clearly an Xorg bug and needs to be fixed there, not something we
need to allow as a valid use-case. For page references/accounting in memcg
this is totally different, since pages can survive in the pagecache
forever. No such bo-cache or anything similar exists for gem_bo.
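
For completeness, the per-reference option from your second table would be
the same hypothetical helpers called once per holder instead of once per
bo, e.g.:

/* Per-reference variant: every cgroup that gains a reference to the bo
 * is charged, and uncharged again when that reference is dropped. */
int gem_bo_add_ref(struct gem_bo *bo, struct drmcg *holder)
{
	return drmcg_try_charge(holder, bo->size) ? 0 : -1;
}

void gem_bo_drop_ref(struct gem_bo *bo, struct drmcg *holder)
{
	drmcg_uncharge(holder, bo->size);
}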

Personally I prefer option A, but without the sharing restriction. If you want that
sharing restriction, we need to figure out how to implement it using
something else. Plus we need to make sure all possible ways to share a bo
are covered (and there are many).
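
Off the top of my head a restriction like that would at least have to
cover flink/GEM_OPEN names, PRIME (dma-buf) export/import, and dma-buf or
drm fds passed over unix sockets. In the sketch above, every path that
turns a foreign bo into a usable handle would have to funnel through one
check, something like (hypothetical helper again):

#include <errno.h>

int drmcg_handle_import(struct gem_bo *bo, struct drmcg *importer)
{
	if (!drmcg_may_share(bo, importer))
		return -EACCES;		/* outside the owner's subtree */
	return 0;
}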
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch