On Tue, 13 Feb 2018 21:01:04 +0100, Yaodong Li <yaodong.li@xxxxxxxxx> wrote:
> On 02/13/2018 07:56 AM, Michal Wajdeczko wrote:
>>> +
>>> +/* GuC WOPCM offset needs to be 16KB aligned. */
>>> +#define GUC_WOPCM_OFFSET_ALIGNMENT (16 << 10)
>>> +/* 16KB reserved at the beginning of GuC WOPCM. */
>>> +#define GUC_WOPCM_RESERVED (16 << 10)
>>> +/* 8KB from GUC_WOPCM_RESERVED is reserved for GuC stack. */
>>> +#define GUC_WOPCM_STACK_RESERVED (8 << 10)
>>> +/* 24KB at the end of GuC WOPCM is reserved for RC6 CTX on BXT. */
>>> +#define BXT_GUC_WOPCM_RC6_CTX_RESERVED (24 << 10)
>> Hmm, I'm still not convinced that we correctly attach it to "GuC WOPCM".
> Can you explain the reason and more about your concerns?
The spec says that the CTX reserved region is separate from the GuC region
and is calculated "against the address of the top of WOPCM".
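For illustration, the two placement rules discussed here (a 16KB-aligned GuC offset, and the RC6 context carved from the top of WOPCM rather than from the GuC base) reduce to simple arithmetic. A minimal userspace sketch, where align_up is a stand-in for the kernel's ALIGN() macro and all firmware/WOPCM sizes used later are made-up values:

```c
#include <stdint.h>

#define GUC_WOPCM_OFFSET_ALIGNMENT     (16 << 10)
#define BXT_GUC_WOPCM_RC6_CTX_RESERVED (24 << 10)

/* Round v up to the next multiple of a (a power of two), like ALIGN(). */
static uint32_t align_up(uint32_t v, uint32_t a)
{
	return (v + a - 1) & ~(a - 1);
}

/* GuC WOPCM offset: whatever sits below it (e.g. HuC fw), rounded up. */
static uint32_t guc_wopcm_base(uint32_t huc_fw_size)
{
	return align_up(huc_fw_size, GUC_WOPCM_OFFSET_ALIGNMENT);
}

/* BXT RC6 context base: computed against the top of WOPCM,
 * independently of where the GuC region ends up. */
static uint32_t bxt_rc6_ctx_base(uint32_t wopcm_top)
{
	return wopcm_top - BXT_GUC_WOPCM_RC6_CTX_RESERVED;
}
```

E.g. a hypothetical 0x35c0-byte HuC firmware would push the GuC base up to 0x4000, and with an illustrative 1MB WOPCM the RC6 context area would start at 0xfa000.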
>> In the result, maybe we should stop focusing on "intel_guc_wopcm" and
>> define everything as "intel_wopcm", as it was earlier suggested by
>> Joonas. Then we can define our struct to match the diagram above as:
>>
>> struct intel_wopcm {
>> 	u32 size;
>> 	struct {
>> 		u32 base;
>> 		u32 size;
>> 	} guc;
>> };
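For concreteness, a userspace sketch of how an arbiter could fill such a struct. The partition function, the notion of a top-of-WOPCM reservation, and every size below are illustrative (the alignment/reserved constants are taken from the patch above); in the driver the sizes would come from the loaded firmware blobs:

```c
#include <stdint.h>

typedef uint32_t u32;

struct intel_wopcm {
	u32 size;		/* total WOPCM size */
	struct {
		u32 base;	/* GuC region offset in WOPCM */
		u32 size;	/* GuC region size */
	} guc;
};

#define GUC_WOPCM_OFFSET_ALIGNMENT (16 << 10)
#define GUC_WOPCM_RESERVED         (16 << 10)

/*
 * Hypothetical arbiter: place the GuC region above the HuC firmware
 * and leave 'top_reserved' bytes (e.g. RC6 context on BXT) untouched
 * at the top of WOPCM. Returns 0 on success, -1 if it does not fit.
 */
static int wopcm_partition(struct intel_wopcm *w, u32 huc_fw_size,
			   u32 guc_fw_size, u32 top_reserved)
{
	u32 base = (huc_fw_size + GUC_WOPCM_OFFSET_ALIGNMENT - 1) &
		   ~(u32)(GUC_WOPCM_OFFSET_ALIGNMENT - 1);
	u32 need = guc_fw_size + GUC_WOPCM_RESERVED;

	if (base + need + top_reserved > w->size)
		return -1;

	w->guc.base = base;
	w->guc.size = w->size - base - top_reserved;
	return 0;
}
```

This keeps the whole calculation in one place; uc code would then only program the resulting base/size into the hardware registers.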
> Thanks Michal! But I don't think this is a quite clean design, for two
> reasons:
>
> 1) I agree there should be something like intel_wopcm to describe the
>    overall wopcm info; that's what I did in my v1 patch series. The
>    question is whether we really need it while we are using only a
>    static wopcm size value.
Are you sure that the WOPCM size is the same for all platforms?
> I think it would be clearer to introduce intel_wopcm as part of a patch
> that supports dynamic wopcm size calculation.
This is exactly what I'm suggesting. Note that the spec uses the term
"arbiter", and we can use struct intel_wopcm as a placeholder for the
partitioning results.
> 2) The wopcm region (a.k.a. partition) is definitely a concept that
>    should exist at least in the uc layer.
I'm not so sure. But it definitely should not be a guc-only concept.
> If we chose not to init uc/guc, it would be nonsense to init these
> partitions in an intel_wopcm attached to drm_i915_private, having
> initialized only part of the struct (e.g. size).
Why nonsense? Since WOPCM is used/needed by several entities (agents),
to properly partition wopcm into regions that satisfy all agents the
arbiter needs to know their specific requirements (like fw sizes in the
case of HuC or GuC).
> To make it clean, I will insist on keeping the guc wopcm region (or
> maybe the other regions' info) at the uc level.
It will not be clean if you make the calculation in one place and keep
partial results in other places. We can at most set up the hw from uc code.
> As I said, the purpose of this series is to enable dynamic GuC WOPCM
> offset and size calculation.
Sure, and all I want is to do it in the right place.
> It's not targeting any code refactoring and only serves as a
> new-feature-enabling patch. The design principle of this series was to
> provide clear new feature enabling by:
>
> 1) aligning with the current code design and implementation;
> 2) providing correct hardware abstraction;
> 3) faultlessly enabling these new features (e.g. dynamic size/offset
>    calculation).
>
> I think this series is now in good shape to meet all of the above
> targets.
So we need an arbiter :)
> On the other hand, I do agree there is always some room to make our
> current code clearer, but what we are talking about is further code
> refactoring. Actually, I already have some ideas for this, e.g. we
> could have some new abstractions such as:
> struct intel_wopcm {
> 	u32 size;
> 	/* any other global wopcm info */
> };
>
> struct wopcm_region {
> 	u32 reserved;	/* reserved size in each region */
> 	u32 offset;	/* offset of each region */
> 	u32 size;	/* size of each region */
> };
>
> struct intel_uc {
> 	struct wopcm_region regions[];
> 	struct intel_uc_fw fws[];
> 	struct intel_guc guc;
> 	...
> };
>
> struct intel_guc_dma {
> 	u32 fw_domains;
> 	lockable_reg size;
> 	lockable_reg offset;
> 	u32 flags;	/* e.g. loading HuC */
> };
>
> guc_dma_init()
> guc_dma_hw_init()
> guc_dma_xfer()
>
> struct intel_guc {
> 	struct intel_guc_dma guc_dma;
> 	/* other guc objects */
> };
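As a sketch of how the proposed uc-level region array might be used, here is an illustrative consistency check an intel_uc arbiter could run after partitioning. The region struct is copied from the proposal above; the ordering/overlap rule, the function, and all sizes in the example are my assumptions, not anything from the spec:

```c
#include <stdint.h>

typedef uint32_t u32;

struct wopcm_region {
	u32 reserved;	/* reserved size in each region */
	u32 offset;	/* offset of each region */
	u32 size;	/* size of each region */
};

/*
 * Hypothetical sanity check: regions sorted by offset, non-overlapping,
 * and contained within the total WOPCM size. Returns 1 if valid.
 */
static int wopcm_regions_valid(const struct wopcm_region *r, int n,
			       u32 total)
{
	u32 prev_end = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (r[i].offset < prev_end || r[i].offset > total ||
		    r[i].size > total - r[i].offset)
			return 0;
		prev_end = r[i].offset + r[i].size;
	}
	return 1;
}
```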
> This would provide better software-layer abstraction. But what I
> learned from the ten versions of code submission is that we need to
> make things clear enough to move forward. The lack of a uc-level
> abstraction is probably the reason why we feel so frustrated about
> this part of the code.
Yep. struct intel_uc will likely make a few things cleaner.
> Can we just move forward by aligning with the current code
> implementation? What we need now is to enable these new features, and
> we definitely can provide more code refactoring patches based on these
> changes later.
This is a question for the maintainers.
> Regards,
> -Jackie
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx