Re: [RFC PATCH v2 4/5] drm, cgroup: Add total GEM buffer allocation limit

On 10.05.19 at 16:57, Kenny Ho wrote:
> On Fri, May 10, 2019 at 8:28 AM Christian König
> <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
>> On 09.05.19 at 23:04, Kenny Ho wrote:
>>> +     /* only allow bo from the same cgroup or its ancestor to be imported */
>>> +     if (drmcgrp != NULL &&
>>> +                     !drmcgrp_is_self_or_ancestor(drmcgrp, obj->drmcgrp)) {
>>> +             ret = -EACCES;
>>> +             goto out_unlock;
>>> +     }
>>> +
>> This will most likely go up in flames.
>>
>> If I'm not completely mistaken, we already use
>> drm_gem_prime_fd_to_handle() to exchange handles between different
>> cgroups in current container usage.
> This is something that I am interested in getting more details from
> the broader community because the details affect how likely this will
> go up in flames ;).  Note that this check does not block sharing of
> handles from cgroup parent to children in the hierarchy, nor does it
> block sharing of handles within a cgroup.
>
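
Just to make sure I read the semantics right, below is a minimal
userspace sketch of what I understand the ancestor walk to do; the
struct layout and field names are invented for illustration, not taken
from the actual patch:

#include <stdbool.h>
#include <stddef.h>

/* Minimal sketch only: the real struct drmcgrp carries more state;
 * only the parent pointer matters for the ancestor walk. */
struct drmcgrp {
        struct drmcgrp *parent;         /* NULL for the root cgroup */
};

/* True if @ancestor is @self or any cgroup above @self in the hierarchy. */
static bool drmcgrp_is_self_or_ancestor(struct drmcgrp *self,
                                        struct drmcgrp *ancestor)
{
        for (; self != NULL; self = self->parent)
                if (self == ancestor)
                        return true;
        return false;
}
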
> I am interested to find out whether, when existing apps share handles
> between containers, there are any expectations on resource management.
> Since there is no drm cgroup controller in current container usage, I
> expect the answer to be no.  In this case, the drm cgroup controller can be
> disabled on its own (in the context of cgroup-v2's unified hierarchy),
> or the process can remain at the root for the drm cgroup hierarchy (in
> the context of cgroup-v1.)  If I understand the cgroup API correctly,
> that means all processes would be part of the root cgroup as far as the
> drm controller is concerned and this check will not come into effect.
> I have verified that this is indeed the current default behaviour of a
> container runtime (runc, which is used by docker, podman and others.)
> The new drm cgroup controller is simply ignored and all processes
> remain at the root of the hierarchy (since there are no other
> cgroups.)  I plan to make contributions to runc (so folks can actually
> use this feature with docker/podman/k8s, etc.) once things stabilize
> on the kernel side.
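
If that reading is right, the runc observation above has a simple
degenerate case: with no drm cgroups ever created, importer and
exporter always resolve to the same root cgroup, so the new check can
never deny anything.  Reusing the sketch from above:

#include <assert.h>

int main(void)
{
        /* Everything still sits in the root drm cgroup. */
        struct drmcgrp root_only = { .parent = NULL };

        /* Importer and exporter resolve to the same cgroup, so the
         * check passes trivially and nothing is ever denied. */
        assert(drmcgrp_is_self_or_ancestor(&root_only, &root_only));
        return 0;
}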

So the drm cgroup controller is separate from the other cgroup controllers?

In other words, as long as userspace doesn't change, this wouldn't have
any effect?

Well, that is unexpected, because then a process would be in different
groups for different controllers, but if that's really the case it
would certainly work.

> On the other hand, if there are expectations for resource management
> between containers, I would like to know who the expected manager is
> and how it fits into the concept of a container (which enforces some
> level of isolation.)  One possible manager may be the display server.
> But as long as the display server is in a parent cgroup of the apps'
> cgroup, the apps can still import handles from the display server
> under the current implementation.  My understanding is that this is
> most likely the case, with the display server simply sitting at the
> default/root cgroup.  But I certainly want to hear more about other
> use cases (for example, is running multiple display servers on a
> single host a realistic possibility?  Are there people running
> multiple display servers inside peer containers?  If so, how do they
> coordinate resources?)
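
If I apply the same sketch to that scenario (cgroup names invented), a
client importing from a display server in a parent cgroup passes the
check, while two peer containers sharing a buffer with each other would
get -EACCES:

#include <assert.h>

int main(void)
{
        /* Hypothetical hierarchy for the scenario above:
         *   root -> desktop (display server) -> app1, app2 (clients) */
        struct drmcgrp root    = { .parent = NULL };
        struct drmcgrp desktop = { .parent = &root };
        struct drmcgrp app1    = { .parent = &desktop };
        struct drmcgrp app2    = { .parent = &desktop };

        /* app1 importing a buffer the display server allocated: allowed,
         * because desktop is an ancestor of app1. */
        assert(drmcgrp_is_self_or_ancestor(&app1, &desktop));

        /* Peer containers sharing with each other: denied, since neither
         * app cgroup is an ancestor of the other. */
        assert(!drmcgrp_is_self_or_ancestor(&app1, &app2));
        return 0;
}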

We definitely have situations with multiple display servers running 
(just think of VR).

I just can't say if they currently use cgroups in any way.

Thanks,
Christian.

>
> I should probably summarize some of these into the commit message.
>
> Regards,
> Kenny
>
>> Christian.
>>

_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx



