Re: [PATCH v4 12/13] drm/msm: Utilize gpu scheduler priorities

On Thu, May 26, 2022 at 6:29 AM Tvrtko Ursulin
<tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
>
>
> On 26/05/2022 04:15, Rob Clark wrote:
> > On Wed, May 25, 2022 at 9:11 AM Tvrtko Ursulin
> > <tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
> >>
> >>
> >> On 24/05/2022 15:57, Rob Clark wrote:
> >>> On Tue, May 24, 2022 at 6:45 AM Tvrtko Ursulin
> >>> <tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
> >>>>
> >>>> On 23/05/2022 23:53, Rob Clark wrote:
> >>>>>
> >>>>> btw, one fun (but unrelated) issue I'm hitting with scheduler... I'm
> >>>>> trying to add an igt test to stress shrinker/eviction, similar to the
> >>>>> existing tests/i915/gem_shrink.c.  But we hit an unfortunate
> >>>>> combination of circumstances:
> >>>>> 1. Pinning memory happens in the synchronous part of the submit ioctl,
> >>>>> before enqueuing the job for the kthread to handle.
> >>>>> 2. The first run_job() callback incurs a slight delay (~1.5ms) while
> >>>>> resuming the GPU
> >>>>> 3. Because of that delay, userspace has a chance to queue up enough
> >>>>> additional jobs to require locking/pinning more than the available system
> >>>>> RAM..
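
(For context, the scheduler side of this has roughly the following
shape; a simplified sketch with made-up struct/field names, not the
actual msm code:)

#include <linux/pm_runtime.h>
#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

/* made-up wrapper, just to have something to hang the sketch on */
struct sketch_submit {
	struct drm_sched_job base;
	struct device *gpu_dev;		/* the GPU device, for runtime PM */
	struct dma_fence *hw_fence;	/* signalled when the hw completes */
};

static struct dma_fence *sketch_run_job(struct drm_sched_job *sched_job)
{
	struct sketch_submit *submit =
		container_of(sched_job, struct sketch_submit, base);

	/* the first job after idle pays the GPU resume cost (~1.5ms) here,
	 * on the scheduler kthread, while userspace is free to keep
	 * stuffing more (already pinned) submits into the queue: */
	pm_runtime_get_sync(submit->gpu_dev);

	/* ... write the ringbuffer / kick the hw ... */

	return dma_fence_get(submit->hw_fence);
}
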
> >>>>
> >>>> Is that one or multiple threads submitting jobs?
> >>>
> >>> In this case multiple.. but I think it could also happen with a single
> >>> thread (provided it didn't stall on a fence, directly or indirectly,
> >>> from an earlier submit), because of how resume and actual job
> >>> submission happen from the scheduler kthread.
> >>>
> >>>>> I'm not sure if we want a way to prevent userspace from getting *too*
> >>>>> far ahead of the kthread.  Or maybe at some point the shrinker should
> >>>>> sleep on non-idle buffers?
> >>>>
> >>>> On the direct reclaim path when invoked from the submit ioctl? In i915
> >>>> we only shrink idle objects on direct reclaim and leave active ones for
> >>>> the swapper. Whether you could do the same depends on what your locking
> >>>> looks like, and whether there would be coupling of locks and fs-reclaim context.
> >>>
> >>> I think the locking is more or less ok, although lockdep is unhappy
> >>> about one thing[1], which I think is a false warning (ie. not
> >>> recognizing that we'd already successfully acquired the obj lock via
> >>> trylock).  We can already reclaim idle bo's in this path.  But the
> >>> problem with a bunch of submits queued up in the scheduler is that
> >>> they are already considered pinned and active.  So at some point we
> >>> need to sleep (hopefully interruptibly) until they are no longer
> >>> active, ie. to throttle userspace trying to shove in more submits
> >>> until some of the enqueued ones have a chance to run and complete.
> >>
> >> Odd, I did not think trylock could trigger that. Looking at your code it
> >> does indeed seem to be two trylocks. I am pretty sure we use the same trylock
> >> trick to avoid it. I am confused..
> >
> > The sequence is,
> >
> > 1. kref_get_unless_zero()
> > 2. trylock, which succeeds
> > 3. attempt to evict or purge (which may or may not have succeeded)
> > 4. unlock
> >
> >   ... meanwhile this has raced with submit (aka execbuf) finishing and
> > retiring and dropping the *other* remaining reference to the bo...
> >
> > 5. drm_gem_object_put() which triggers drm_gem_object_free()
> > 6. in our free path we acquire the obj lock again and then drop it,
> > which arguably is unnecessary and only serves to satisfy some
> > GEM_WARN_ON(!msm_gem_is_locked(obj)) in code paths that are also used
> > elsewhere
> >
> > lockdep doesn't take the previously successful trylock+unlock sequence
> > into account, so it assumes that the code that triggered recursion into
> > the shrinker could be holding the object's lock.
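
To spell that out, the shape of it is roughly the following (condensed
sketch, the evict/teardown helpers are just stand-ins, not the literal
msm code):

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

static bool try_to_evict_or_purge(struct drm_gem_object *obj);	/* placeholder */
static void teardown(struct drm_gem_object *obj);		/* placeholder */

/* shrinker scan side, per object on the LRU (steps 1-5): */
static void scan_one(struct drm_gem_object *obj)
{
	if (!kref_get_unless_zero(&obj->refcount))	/* 1 */
		return;

	if (dma_resv_trylock(obj->resv)) {		/* 2: trylock succeeds */
		try_to_evict_or_purge(obj);		/* 3: may or may not work */
		dma_resv_unlock(obj->resv);		/* 4: unlock */
	}

	/*
	 * 5: if retire raced with us and dropped the other reference,
	 * this put is the final one and lands in the free path below.
	 */
	drm_gem_object_put(obj);
}

/* free side (step 6), reached via drm_gem_object_free(): */
static void free_one(struct drm_gem_object *obj)
{
	/*
	 * Taking the lock here mostly just satisfies
	 * GEM_WARN_ON(!msm_gem_is_locked(obj)) in shared teardown paths,
	 * but it is also what makes lockdep think the path that recursed
	 * into the shrinker could still be holding the obj lock.
	 */
	dma_resv_lock(obj->resv, NULL);
	teardown(obj);
	dma_resv_unlock(obj->resv);
}
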
>
> Ah yes, missed that lock after trylock in msm_gem_shrinker/scan(). Well
> i915 has the same sequence in our shrinker, but the difference is we use
> delayed work to actually free, _and_ use trylock in the delayed worker.
> It does feel a bit inelegant (objects with no reference count which
> cannot be trylocked?!), but as this is code recently refactored by
> Maarten, I think it's best to sync with him for the full story.

ahh, we used to use delayed work for the frees, but realized that was
causing jank: we'd get a bunch of bo's queued up to free, and at some
point that would cause us to miss deadlines

I suppose we could have used an unbound wq for the frees instead of the
same one we used for retiring submits (at the time a wq; we've since
transitioned to a kthread worker to avoid being preempted by RT SF
threads)
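
i.e. something along these lines (hypothetical sketch, the names are
made up and this is not what msm actually does):

#include <linux/workqueue.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

static struct workqueue_struct *free_wq;
static struct kthread_worker *retire_worker;

static int setup_workers(void)
{
	/* frees go to an unbound wq, so a pile of queued frees can't
	 * hold up retire processing (or cause jank): */
	free_wq = alloc_workqueue("gem_free", WQ_UNBOUND, 0);
	if (!free_wq)
		return -ENOMEM;

	/* retire/submit processing gets a dedicated kthread worker,
	 * whose task we can bump to an RT policy so it isn't starved
	 * the way a shared wq worker can be: */
	retire_worker = kthread_create_worker(0, "gpu-retire");
	if (IS_ERR(retire_worker)) {
		destroy_workqueue(free_wq);
		return PTR_ERR(retire_worker);
	}
	sched_set_fifo_low(retire_worker->task);

	return 0;
}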

> >> Otherwise if you can afford to sleep you can of course throttle
> >> organically via direct reclaim. Unless I am forgetting some key gotcha -
> >> it's been a while since I've been active in this area.
> >
> > So, one thing that is awkward about sleeping in this path is that
> > there is no way to propagate back -EINTR, so we end up doing an
> > uninterruptible sleep in something that could be called indirectly
> > from a userspace syscall.. i915 seems to deal with this by limiting it
> > to the shrinker being called from kswapd.  I think in the shrinker we want
> > to know whether it is ok to sleep (ie. not a syscall-triggered
> > codepath, and whether we are under enough memory pressure to justify
> > sleeping).  For the syscall path, I'm playing with something that lets
> > me pass __GFP_RETRY_MAYFAIL | __GFP_NOWARN to
> > shmem_read_mapping_page_gfp(), and then stall after the shrinker has
> > failed, somewhere where we can make it interruptible.  Ofc, that
> > doesn't help with all the other random memory allocations which can
> > fail, so not sure if it will turn out to be a good approach or not.
> > But I guess pinning the GEM bo's is the single biggest potential
> > consumer of pages in the submit path, so maybe it will be better than
> > nothing.
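
The experiment looks roughly like this so far (sketch only, and the
wait_for_retires() helper is a made-up placeholder):

#include <linux/shmem_fs.h>
#include <linux/pagemap.h>
#include <linux/gfp.h>
#include <linux/err.h>

/* placeholder: interruptibly wait for some active submits to retire,
 * returning 0 on progress or -ERESTARTSYS on a signal */
static int wait_for_retires(void);

static struct page *get_one_page(struct address_space *mapping, pgoff_t idx)
{
	gfp_t gfp = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
	struct page *page;

	/* ask politely first: no OOM killer, no warning splat */
	page = shmem_read_mapping_page_gfp(mapping, idx, gfp);
	if (!IS_ERR(page))
		return page;

	/* the shrinker couldn't find enough; stall here, where we can
	 * still propagate a signal back out of the syscall: */
	if (wait_for_retires())
		return ERR_PTR(-ERESTARTSYS);

	return shmem_read_mapping_page_gfp(mapping, idx, GFP_KERNEL);
}
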
>
> We play similar games, although by a quick look I am not sure we quite
> manage to honour/propagate signals. This has certainly been a
> historically fiddly area. If you first ask for no-reclaim allocations
> and invoke the shrinker manually, then fall back to a bigger hammer,
> you should be able to do it.

yeah, I think it should.. but I've been fighting a bit today with the
fact that the bo state wrt. shrinking has grown a bit complicated
(ie. is it purgeable, evictable, evictable if we are willing to wait a
short amount of time, vs pinned for scanout and not worth waiting on,
etc.. plus I managed to make it a bit worse recently with fenced un-pin
of the vma, to deal with the case where userspace notices that, for
userspace-allocated iova, it can release the virtual address before the
kernel has had a chance to retire the submit) ;-)
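
(conceptually the buckets end up being something like the below, with
invented names just to illustrate what the shrinker has to juggle:)

/* invented names, roughly in order of how cheap they are to reclaim */
enum bo_reclaim_state {
	BO_PURGEABLE,		/* madvise(DONTNEED): can be dropped outright */
	BO_EVICTABLE,		/* idle: can be unmapped + swapped out now */
	BO_EVICTABLE_WAIT,	/* active: evictable after a short wait for
				 * the GPU / the fenced vma un-pin */
	BO_PINNED,		/* e.g. pinned for scanout: don't bother waiting */
};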

BR,
-R

> Regards,
>
> Tvrtko


