Re: [RFC v3 5/8] dmabuf: Add gpu cgroup charge transfer function

On Mon, Mar 21, 2022 at 10:45 AM Michal Koutný <mkoutny@xxxxxxxx> wrote:
>
> Hello.
>
> On Wed, Mar 09, 2022 at 04:52:15PM +0000, "T.J. Mercier" <tjmercier@xxxxxxxxxx> wrote:
> > +int dma_buf_charge_transfer(struct dma_buf *dmabuf, struct gpucg *gpucg)
> > +{
> > +#ifdef CONFIG_CGROUP_GPU
> > +     struct gpucg *current_gpucg;
> > +     int ret = 0;
> > +
> > +     /*
> > +      * Verify that the cgroup of the process requesting the transfer is the
> > +      * same as the one the buffer is currently charged to.
> > +      */
> > +     current_gpucg = gpucg_get(current);
> > +     mutex_lock(&dmabuf->lock);
> > +     if (current_gpucg != dmabuf->gpucg) {
> > +             ret = -EPERM;
> > +             goto err;
> > +     }
>
> Add a shortcut for gpucg == current_gpucg?

Good idea, thank you!
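
So something like this right after the existing permission check
(untested sketch; the new "out" label is my own addition and would
just share the unlock/gpucg_put cleanup with the error path while
returning 0):

	/* Nothing to transfer if the buffer is already charged to the
	 * destination cgroup.
	 */
	if (current_gpucg == gpucg)
		goto out;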

>
> > +
> > +     ret = gpucg_try_charge(gpucg, dmabuf->gpucg_dev, dmabuf->size);
> > +     if (ret)
> > +             goto err;
> > +
> > +     dmabuf->gpucg = gpucg;
> > +
> > +     /* uncharge the buffer from the cgroup it's currently charged to. */
> > +     gpucg_uncharge(current_gpucg, dmabuf->gpucg_dev, dmabuf->size);
>
> I think gpucg_* API would need to cater for such transfers too since
> possibly transitional breach of a limit during the transfer may
> unnecessarily fail the operation.

Since the charge is duplicated in two cgroups for a short period
before it is uncharged from the source cgroup, I guess the situation
you're thinking about is a global (or common ancestor) limit? I can
see how that would be a problem for transfers done this way. An
alternative would be to swap the order of the charge operations:
first uncharge the source, then try_charge the destination. To be
certain the uncharge is reversible if the try_charge fails, I think
I'd need either a mutex taken at all gpucg_*charge call sites or
access to gpucg_mutex, which implies adding transfer support to gpu.c
as part of the gpucg_* API itself and calling it from here (rough
sketch below). Am I following you correctly?
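
Something like the following in gpu.c is what I'm imagining.
gpucg_transfer_charge is just a hypothetical name, and the details
(exact signatures, and whether the existing charge/uncharge paths can
run with gpucg_mutex held or would need __locked variants) are
assumptions on my part rather than anything in this series:

/*
 * Sketch: move an existing charge from one cgroup to another.
 * Holding gpucg_mutex across both steps is what would make the
 * uncharge reversible, assuming every other charge/uncharge path
 * takes (or can be made to take) the same mutex.
 */
int gpucg_transfer_charge(struct gpucg *src, struct gpucg *dst,
			  struct gpucg_device *device, u64 size)
{
	int ret;

	mutex_lock(&gpucg_mutex);

	/*
	 * Uncharge the source first so a shared ancestor limit is not
	 * transiently exceeded by counting the buffer twice.
	 */
	gpucg_uncharge(src, device, size);

	ret = gpucg_try_charge(dst, device, size);
	if (ret)
		/* Roll back: re-charge the source we just uncharged. */
		gpucg_try_charge(src, device, size);

	mutex_unlock(&gpucg_mutex);
	return ret;
}

dma_buf_charge_transfer() would then call this with dmabuf->gpucg and
the destination gpucg instead of doing the try_charge/uncharge pair
itself.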

This series doesn't actually add limit support, just accounting, but
I'd like to get this part right.

>
> My 0.02€,
> Michal

Thanks!



