Re: [PATCH 07/13 v4] drm/i915: GuC submission setup, phase 1

On 28/07/15 00:12, O'Rourke, Tom wrote:
On Mon, Jul 27, 2015 at 03:41:31PM -0700, Yu Dai wrote:

On 07/24/2015 03:31 PM, O'Rourke, Tom wrote:
[TOR:] When I see "phase 1" I also look for "phase 2".
A subject that better describes the change in this patch
would help.

On Thu, Jul 09, 2015 at 07:29:08PM +0100, Dave Gordon wrote:
From: Alex Dai <yu.dai@xxxxxxxxx>

This adds the first of the data structures used to communicate with the
GuC (the pool of guc_context structures).

We create a GuC-specific wrapper round the GEM object allocator as all
GEM objects shared with the GuC must be pinned into GGTT space at an
address that is NOT in the range [0..WOPCM_SIZE), as that range of GGTT
addresses is not accessible to the GuC (from the GuC's point of view,
it's permanently reserved for other objects such as the BootROM & SRAM).
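
In outline, such a wrapper might look something like this (a sketch
only, not the actual patch; the function name is illustrative, it uses
the usual GEM helpers, and the exact bias value is what the discussion
further down this thread is about):

	/* Sketch: allocate a GEM object and pin it into GGTT above the
	 * range that the GuC cannot access. GUC_WOPCM_TOP stands in for
	 * whatever bias value is agreed on below. */
	static struct drm_i915_gem_object *
	gem_allocate_guc_obj(struct drm_device *dev, u32 size)
	{
		struct drm_i915_gem_object *obj;

		obj = i915_gem_alloc_object(dev, round_up(size, PAGE_SIZE));
		if (!obj)
			return NULL;

		if (i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
					  PIN_OFFSET_BIAS | GUC_WOPCM_TOP)) {
			drm_gem_object_unreference(&obj->base);
			return NULL;
		}

		return obj;
	}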
[TOR:] I would like a clarification on the excluded range.
The excluded range should be 0 to "size for guc within
WOPCM area" and not 0 to "size of WOPCM area".

Nope, the GGTT range [0..WOPCM_SIZE) should be excluded from GuC usage.
The BSpec clearly says that the range from 0 to WOPCM_TOP-1 is for the
BootROM, SRAM and WOPCM, and that everything from WOPCM_TOP upwards is
GFX DRAM. Note that this GGTT space is still available to any gfx
object as long as it is not accessed by the GuC (it is OK for it to
pass through the GuC).
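
So the picture is roughly this (not to scale):

	0                 WOPCM_TOP                           end of GGTT
	+-----------------+--------------------------------------------+
	| BootROM / SRAM  | GFX DRAM: usable for gfx objects,          |
	| / WOPCM (not    | including anything shared with the GuC     |
	| GuC-accessible) |                                            |
	+-----------------+--------------------------------------------+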

[TOR:] Should we take a closer look at the pin offset bias
for guc objects?  GUC_WOPCM_SIZE_VALUE is not the full size
of the WOPCM area.

I'm inclined to set the bias to GUC_WOPCM_TOP, and then define that as
the sum of GUC_WOPCM_OFFSET_VALUE and GUC_WOPCM_SIZE_VALUE. That seems
to be what the BSpec pages "WriteOnceProtectedContentMemory (WOPCM)
Management" and "WOPCM Memory Map" suggest, although I think they're
pretty unclear on the details :(
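
i.e. something like this (just a sketch; GUC_WOPCM_TOP is the name I'm
assuming, the other two are the existing definitions):

	/* Assumed: first GGTT address above the WOPCM region */
	#define GUC_WOPCM_TOP	(GUC_WOPCM_OFFSET_VALUE + GUC_WOPCM_SIZE_VALUE)

and then pass that as the PIN_OFFSET_BIAS value when pinning anything
the GuC needs to access.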

Do you (both) agree this would be the right value?

[snip]

+	/* If GuC scheduling is enabled, setup params here. */
[TOR:] I assume from this that "GuC scheduling" == "GuC submission".
This is a little confusing.  I recommend either rewording
"GuC scheduling" or adding comment text to explain.

For now, yes, GuC scheduling only does the 'submission' work
because of the current kernel in-order queue. Once we have client
direct submission enabled, there WILL BE GuC scheduling inside the
firmware, according to the priority of each queue etc.

Thanks,
Alex

I changed the line above to "GuC submission", while the one a few
lines further down now says:

	/* Unmask this bit to enable the GuC's internal scheduler */

to make it quite clear that we're not referring to the host-based GPU
scheduler currently in review.

.Dave.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



