Re: [PATCH v7 1/6] gpu: rfc: Proposal for a GPU cgroup controller

On Thu, May 19, 2022 at 2:31 AM <eballetbo@xxxxxxxxxx> wrote:
>
> From: Enric Balletbo i Serra <eballetbo@xxxxxxxxxx>
>
> On Tue, 10 May 2022 23:56:45 +0000, T.J. Mercier wrote:
> > From: Hridya Valsaraju <hridya@xxxxxxxxxx>
> >
>
> Hi T.J. Mercier,
>
> Many thanks for this effort. It caught my attention because we might have a use
> case where this feature can be useful for us. Hence I'd like to jump in and be
> part of the discussion; I'd really appreciate it if you could cc me on the next
> versions.
>
Hi Enric,

Sure thing, thanks for engaging.

> While reading the full patchset I was a bit confused about the status of this
> proposal. In fact, the rfc in the subject combined with the number of iterations
> (already seven) confused me. So I'm wondering if this is an RFC or a 'real'
> proposal that you already want to land.
>
I'm sorry about this. I'm quite new to kernel development (this is my
first set of patches) and the point at which I should have
transitioned from RFC to PATCH was not clear to me. The current status
is initial support for accounting, which would later be built upon to
expand what is tracked (more than just buffers from heaps) and to add
support for limiting. I see you have also commented on this below.

> If this is still an RFC I'd remove the 'rfc: Proposal' and use the more canonical
> way, which is to put RFC in the [], i.e. [PATCH RFC v7] cgroup: Add a GPU cgroup
> controller.
>
> If it is not, I'd just remove the RFC and put the subject under the cgroup
> subsystem instead of gpu, i.e. [PATCH v7] cgroup: Add a GPU cgroup controller.
>
> I don't want to nitpick, but IMO that helps new people follow the history of
> the patchset.
>
> > This patch adds a proposal for a new GPU cgroup controller for
> > accounting/limiting GPU and GPU-related memory allocations.
>
> As far as I can see the only thing being added here is the accounting, so I'd
> remove any reference to limiting and just explain what the patch really
> introduces, not the future; otherwise it is confusing and you expect more than
> the patch really does.
>
> It is important to keep the commit message in sync with what the patch really
> does.
>
Acknowledged, thank you.

> > The proposed controller is based on the DRM cgroup controller[1] and
> > follows the design of the RDMA cgroup controller.
> >
> > The new cgroup controller would:
> > * Allow setting per-device limits on the total size of buffers
> >   allocated by device within a cgroup.
> > * Expose a per-device/allocator breakdown of the buffers charged to a
> >   cgroup.
> >
> > The prototype in the following patches is only for memory accounting
> > using the GPU cgroup controller and does not implement limit setting.
> >
> > [1]: https://lore.kernel.org/amd-gfx/20210126214626.16260-1-brian.welty@xxxxxxxxx/
> >
>
> I think this is material for the cover letter more than the commit message. When
> I read this I was expecting all of this in this patch.
>
> > Signed-off-by: Hridya Valsaraju <hridya@xxxxxxxxxx>
> > Signed-off-by: T.J. Mercier <tjmercier@xxxxxxxxxx>
> > ---
> > v7 changes
> > Remove comment about duplicate name rejection which is not relevant to
> > cgroups users per Michal Koutný.
> >
> > v6 changes
> > Move documentation into cgroup-v2.rst per Tejun Heo.
> >
> > v5 changes
> > Drop the global GPU cgroup "total" (sum of all device totals) portion
> > of the design since there is no currently known use for this per
> > Tejun Heo.
> >
> > Update for renamed functions/variables.
> >
> > v3 changes
> > Remove Upstreaming Plan from gpu-cgroup.rst per John Stultz.
> >
> > Use more common dual author commit message format per John Stultz.
> > ---
> >  Documentation/admin-guide/cgroup-v2.rst | 23 +++++++++++++++++++++++
> >  1 file changed, 23 insertions(+)
> >
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index 69d7a6983f78..2e1d26e327c7 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -2352,6 +2352,29 @@ first, and stays charged to that cgroup until that resource is freed. Migrating
> >  a process to a different cgroup does not move the charge to the destination
> >  cgroup where the process has moved.
> >
> > +
> > +GPU
> > +---
> > +
> > +The GPU controller accounts for device and system memory allocated by the GPU
> > +and related subsystems for graphics use. Resource limits are not currently
> > +supported.
> > +
> > +GPU Interface Files
> > +~~~~~~~~~~~~~~~~~~~~
> > +
> > +  gpu.memory.current
> > +     A read-only file containing memory allocations in flat-keyed format. The key
> > +     is a string representing the device name. The value is the size of the memory
> > +     charged to the device in bytes. The device names are globally unique.::
> > +
> > +       $ cat /sys/kernel/fs/cgroup1/gpu.memory.current
>
> I think this is outdated, you are using cgroup v2, right?
>
Oh, "cgroup1" was meant to refer to the name of a cgroup, not to cgroup
v1. A different name would be better here, something like
/sys/fs/cgroup/<group-name>/gpu.memory.current.

> > +       dev1 4194304
> > +       dev2 104857600
> > +
>
> When I applied the full series I was expecting to see the memory allocated by
> the gpu devices or users of the gpu in this file but, after some experiments,
> what I saw is the memory allocated by any process that uses the dma-buf heap
> API (not necessarily gpu users). For example, if you create a small program
> that allocates some memory via the dma-buf heap API and then you cat the
> gpu.memory.current file, you see that the memory accounted is not related to
> the gpu.
>
> This is really confusing; it looks to me like the patches evolved to account
> memory that is not really related to the GPU but is allocated via the dma-buf
> heap API. IMO the name of the file should match what it really does, to avoid
> confusion.
>
> So, is this patchset meant to be GPU specific? If the answer is yes, that's
> good, but that's not what I experienced. Am I missing something?
>
There are two reasons this exists as a GPU controller. The first is
that most graphics buffers in Android come from these heaps, and this
is primarily what we are interested in accounting. However the idea is
to account other graphics memory types more commonly used on desktop
under different resource names with this controller. The second reason
predates my involvement, but my understanding is that Hridya tried to
upstream heap tracking via tracepoints but was asked to try to use GPU
cgroups instead, which led to her initial version of this series. So
this is a starting point. Any commentary on why this controller would
or would not work for any use cases you have in mind (provided the
appropriate charging/uncharging code is plugged in) would be
appreciated!
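
For reference, a minimal version of the experiment you describe might
look roughly like the sketch below. It is just an illustration (not code
from this series), and it assumes a system heap exposed at
/dev/dma_heap/system with an arbitrarily chosen 4 MiB allocation:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 4 * 1024 * 1024,          /* 4 MiB buffer */
		.fd_flags = O_RDWR | O_CLOEXEC,  /* flags for the returned dma-buf fd */
	};
	int heap_fd = open("/dev/dma_heap/system", O_RDWR | O_CLOEXEC);

	if (heap_fd < 0) {
		perror("open /dev/dma_heap/system");
		return 1;
	}
	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		return 1;
	}

	/*
	 * With this series applied, the charge should stay with this
	 * process's cgroup until the buffer is freed.
	 */
	printf("allocated %llu bytes as dma-buf fd %u\n",
	       (unsigned long long)alloc.len, alloc.fd);
	pause();

	close(alloc.fd);
	close(heap_fd);
	return 0;
}

While the program is paused, reading gpu.memory.current in that
process's cgroup should show the 4 MiB charged under the heap's name,
which is what you observed.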

By the way, discussion around earlier proposals on this topic
suggested the "G" should be for "general" instead of "graphics", I
think in recognition of the breadth of resources that would eventually
be tracked by it.
https://lore.kernel.org/amd-gfx/YBp4ap+1l2KWbqEJ@phenom.ffwll.local/



> If the answer is that it evolved to track dma-buf heap allocations, I think all
> the patches need some rework to adapt the wording, as right now the gpu wording
> seems confusing to me.
>
> > +     The device name string is set by a device driver when it registers with the
> > +     GPU cgroup controller to participate in resource accounting.
> > +
> >  Others
> >  ------
> >
> >
> Thanks,
>  Enric
>



