Re: [Intel-gfx] [RFC PATCH 00/97] Basic GuC submission support in the i915


On Tue, May 25, 2021 at 11:32:26AM +0100, Tvrtko Ursulin wrote:
> 
> On 06/05/2021 20:13, Matthew Brost wrote:
> > Basic GuC submission support. This is the first bullet point in the
> > upstreaming plan covered in the following RFC [1].
> > 
> > At a very high level the GuC is a piece of firmware which sits between
> > the i915 and the GPU. It offloads some of the scheduling of contexts
> > from the i915 and programs the GPU to submit contexts. The i915
> > communicates with the GuC and the GuC communicates with the GPU.
> > 
> > GuC submission will be disabled by default on all current upstream
> > platforms behind a module parameter - enable_guc. A value of 3 will
> > enable submission and HuC loading via the GuC. GuC submission should
> > work on all gen11+ platforms assuming the GuC firmware is present.
> > 
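[As a concrete illustration of the paragraph above - the modprobe.d path is just an example location, but enable_guc itself is the real i915 module parameter described in the cover letter:]

```
# /etc/modprobe.d/i915-guc.conf (example file name)
# enable_guc=3 sets both bits: bit 0 = GuC submission, bit 1 = HuC loading via GuC
options i915 enable_guc=3
```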
> > This is a huge series and it is completely unrealistic to merge all of
> > these patches at once. Fortunately I believe we can break down the
> > series into different merges:
> > 
> > 1. Merge Chris Wilson's patches. These have already been reviewed
> > upstream and I fully agree with these patches as a precursor to GuC
> > submission.
> > 
> > 2. Update to GuC 60.1.2. These are largely Michal's patches.
> > 
> > 3. Turn on GuC/HuC auto mode by default.
> > 
> > 4. Additional patches needed to support GuC submission. This is any
> > patch not covered by 1-3 in the first 34 patches. e.g. 'Engine relative
> > MMIO'
> > 
> > 5. GuC submission support. Patches number 35+. These all don't have to
> > merge at once though as we don't actually allow GuC submission until the
> > last patch of this series.
> 
> For the GuC backend/submission part only - it seems to me none of my review
> comments I made in December 2019 have been implemented. At that point I

I wouldn't say none of them have been implemented - a lot of the fixes
have gone in, just not everything you wanted.

> stated, and this was all internally at the time mind you, that I do not
> think the series is ready and there were several high level issues that
> would need to be sorted out. I don't think I gave my ack or r-b back then
> and the promise was a few things would be worked on post (internal) merge.
> That was supposed to include upstream refactoring to enable GuC better
> slotting in as a backend. Fast forward a year and a half later and the only
> progress we had in this area has been deleted.
> 
> From the top of my head, and having glanced the series as posted:
> 
>  * Self-churn factor in the series is too high.

Not sure what you mean by this? That the patches have been reworked
internally too many times?

>  * Patch ordering issues.

We are going to clean up some of the ordering as these 97 patches are
posted in smaller mergeable series, but at the end of the day this is a
bit of a bikeshed. GuC submission can't be turned on until patch 97, so
IMO the order in which the patches before that land really isn't a big
deal, as we are not breaking anything.

>  * GuC context state machine is way too dodgy to have any confidence it can
> be read and race conditions understood.

I know you don't really like the state machine, but without it there is
no real way to prevent a DoS on resources and no real way to fairly
distribute guc_ids. I know you have had other suggestions here, but they
either won't work or end up no less complicated.

For what it is worth, the state machine will get simplified when we hook
into the DRM scheduler, as we won't have to deal with submitting from
IRQ contexts in the backend or with having more than one request in the
backend at a time.
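[To make the shape of the dispute concrete, here is a hypothetical sketch, not the actual i915 code: the idea is that a context moves through a small set of named states, and the finite pool of guc_ids is only held in the states that need one, which is what prevents one client from exhausting the pool (the DoS concern above). All state names below are illustrative.]

```c
#include <stdbool.h>

/* Illustrative states only -- the real driver has more. A guc_id is
 * held in every state except IDLE, so the pool is bounded by the
 * number of contexts that are actually registered or in flight. */
enum guc_ctx_state {
	GUC_CTX_IDLE,          /* no guc_id assigned */
	GUC_CTX_REGISTERED,    /* guc_id assigned, registered with firmware */
	GUC_CTX_SUBMITTED,     /* requests in flight */
	GUC_CTX_UNREGISTERING, /* guc_id being returned to the pool */
};

/* An explicit transition table is what makes the machine auditable:
 * every legal edge is written down in one place. */
static bool guc_ctx_transition_valid(enum guc_ctx_state from,
				     enum guc_ctx_state to)
{
	switch (from) {
	case GUC_CTX_IDLE:
		return to == GUC_CTX_REGISTERED;
	case GUC_CTX_REGISTERED:
		return to == GUC_CTX_SUBMITTED || to == GUC_CTX_UNREGISTERING;
	case GUC_CTX_SUBMITTED:
		return to == GUC_CTX_REGISTERED;
	case GUC_CTX_UNREGISTERING:
		return to == GUC_CTX_IDLE;
	}
	return false;
}
```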

>  * Context pinning code with its magical two adds, subtract and cmpxchg is
> dodgy as well.

Daniele tried to remove this and it proved quite difficult and created
even more races in the backend code. That was prior to the pre-pin and
post-unpin code, which makes this even harder to fix, as I believe those
functions would need to be removed first. Not saying we can't revisit
this someday, but I personally really like it - it is a clever way to
avoid reentering the pin / unpin code while asynchronous things are
happening, rather than some complex locking scheme. Lastly, this code
has proved incredibly stable: I don't think we've had to fix a single
thing in this area since we've been using it internally.
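[For readers without the driver in front of them, a minimal userspace sketch of the general technique being debated - an atomic pin count where only the 0 -> 1 winner runs setup and only a successful 1 -> 0 cmpxchg runs teardown, so concurrent pinners never re-enter either path. This is deliberately simplified and is not the i915 scheme with its two adds; all names are hypothetical.]

```c
#include <stdatomic.h>
#include <stdbool.h>

struct ctx {
	atomic_int pin_count;
	bool hw_setup_done; /* stands in for the real (de)registration work */
};

static void ctx_pin(struct ctx *c)
{
	/* Claim a reference first; only the thread that wins the
	 * 0 -> 1 transition performs the one-time setup. */
	if (atomic_fetch_add(&c->pin_count, 1) == 0)
		c->hw_setup_done = true;
}

static void ctx_unpin(struct ctx *c)
{
	int old = atomic_load(&c->pin_count);

	for (;;) {
		if (old == 1) {
			/* Final unpin: tear down only if nobody
			 * re-pinned between the load and the swap. */
			if (atomic_compare_exchange_weak(&c->pin_count,
							 &old, 0)) {
				c->hw_setup_done = false;
				return;
			}
		} else {
			/* Not the last reference; just drop it. */
			if (atomic_compare_exchange_weak(&c->pin_count,
							 &old, old - 1))
				return;
		}
		/* cmpxchg failure reloaded 'old'; retry. */
	}
}
```

The appeal of this shape, as argued above, is that the atomics alone serialize the setup/teardown decisions without a separate lock around the pin path.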

>  * Kludgy way of interfacing with rest of the driver instead of refactoring
> to fit (idling, breadcrumbs, scheduler, tasklets, ...).
>

Idling and breadcrumbs seem clean to me. The scheduler + tasklet are
going away once the DRM scheduler lands, so there is no need to rework
those only to rework them again.
 
> Now perhaps the latest plan is to ignore all these issues and still merge,
> then follow up with throwing it away, mostly or at least largely, in which
> case there isn't any point really to review the current state yet again. But
> it is sad that we got to this state. So just for the record - all this was
> reviewed in Nov/Dec 2019. By me among other folks and I at least deemed it
> not ready in this form.
> 

I personally don't think it is in that bad a shape. The fact that I
could put together a PoC more or less fully integrating this backend
into the DRM scheduler within a few days speaks, I think, to the quality
and flexibility of this backend compared to execlists.

Matt 

> Regards,
> 
> Tvrtko


