On 23/06/2022 22:05, Zeng, Oak wrote:
-----Original Message-----
From: Intel-gfx <intel-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Tvrtko
Ursulin
Sent: June 23, 2022 7:06 AM
To: Landwerlin, Lionel G <lionel.g.landwerlin@xxxxxxxxx>; Vishwanathapura,
Niranjana <niranjana.vishwanathapura@xxxxxxxxx>
Cc: Zanoni, Paulo R <paulo.r.zanoni@xxxxxxxxx>; intel-gfx@xxxxxxxxxxxxxxxxxxxxx;
dri-devel@xxxxxxxxxxxxxxxxxxxxx; Hellstrom, Thomas <thomas.hellstrom@xxxxxxxxx>;
Wilson, Chris P <chris.p.wilson@xxxxxxxxx>; Vetter, Daniel
<daniel.vetter@xxxxxxxxx>; christian.koenig@xxxxxxx; Auld, Matthew
<matthew.auld@xxxxxxxxx>
Subject: Re: [Intel-gfx] [PATCH v3 3/3] drm/doc/rfc: VM_BIND uapi definition
On 23/06/2022 09:57, Lionel Landwerlin wrote:
On 23/06/2022 11:27, Tvrtko Ursulin wrote:
After a vm_unbind, the UMD can re-bind the same VA range against an active
VM. Though I am not sure, for the Mesa use case, whether that new mapping is
required for a running GPU job or only for the next submission. But by
ensuring the TLB flush upon unbind, the KMD can ensure correctness.
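For illustration, the flush-on-unbind policy being described would amount to
something like the sketch below. The names (struct gpu_vm, vm_clear_range(),
vm_tlb_invalidate_range()) are hypothetical stand-ins, not actual i915 code.

#include <linux/types.h>

struct gpu_vm;

/* Hypothetical helpers standing in for the real PTE/TLB management. */
void vm_clear_range(struct gpu_vm *vm, u64 start, u64 length);
void vm_tlb_invalidate_range(struct gpu_vm *vm, u64 start, u64 length);

static void vm_unbind_range(struct gpu_vm *vm, u64 start, u64 length)
{
	/* Tear down the PTEs for [start, start + length). */
	vm_clear_range(vm, start, length);

	/*
	 * Invalidate before the unbind is reported complete, so a later
	 * re-bind of the same VA range cannot observe stale translations.
	 */
	vm_tlb_invalidate_range(vm, start, length);
}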
Isn't that their problem? If they re-bind for submitting _new_ work
then they get the flush as part of the batch buffer preamble.
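For clarity, "flush as part of the batch buffer preamble" means roughly the
following; emit_tlb_invalidating_flush(), emit_bb_start() and struct ring are
hypothetical stand-ins, not real i915 functions.

#include <linux/types.h>

struct ring;

int emit_tlb_invalidating_flush(struct ring *ring);
int emit_bb_start(struct ring *ring, u64 batch_addr);

static int emit_submission(struct ring *ring, u64 batch_addr)
{
	int err;

	/* Invalidate the TLBs before jumping into the new batch. */
	err = emit_tlb_invalidating_flush(ring);
	if (err)
		return err;

	/* Then MI_BATCH_BUFFER_START into the user batch. */
	return emit_bb_start(ring, batch_addr);
}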
In the non-sparse case, if a VA range is unbound, it is invalid to use
that range for anything until it has been rebound by something else.
We'll take the fence provided by vm_bind and put it as a wait fence on
the next execbuffer.
It might be safer in case of memory over-fetching?
TLB flush will have to happen at some point, right?
What's the alternative to doing it in unbind?
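The UMD-side flow described here is roughly the following sketch; the
vm_bind()/execbuffer() wrappers and their fence parameters are hypothetical,
illustrating the flow rather than the final uapi.

#include <stdint.h>

/* Hypothetical wrappers around the vm_bind and execbuf ioctls. */
int vm_bind(int drm_fd, uint32_t vm_id, uint64_t va, uint64_t size,
	    uint32_t bo_handle, uint32_t *out_fence);
int execbuffer(int drm_fd, uint32_t vm_id, uint64_t batch_va,
	       uint32_t in_fence);

int submit_with_fresh_binding(int drm_fd, uint32_t vm_id, uint64_t va,
			      uint64_t size, uint32_t bo_handle,
			      uint64_t batch_va)
{
	uint32_t bind_done; /* signalled when the bind has completed */
	int err;

	err = vm_bind(drm_fd, vm_id, va, size, bo_handle, &bind_done);
	if (err)
		return err;

	/* The batch only starts once the new binding is in place. */
	return execbuffer(drm_fd, vm_id, batch_va, bind_done);
}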
Currently TLB flush happens from the ring before every BB_START and also
when i915 returns the backing store pages to the system.
Can you explain more why a TLB flush is needed when i915 retires the backing
storage? I never figured that out when I looked at the code. As I understand
it, the TLB caches the GPU page tables, which map a VA to a PA. So it is
straightforward to me that we perform a TLB flush when we change the page
table (either at vm_bind time or at unbind time; better at unbind time for
performance reasons).
I don't know which performs better - someone would have to measure the two
approaches? Certainly on platforms where we only have global TLB
flushing the cost is quite high, so my thinking was to allow i915 to
control when it will be done and not guarantee it in the uapi if it
isn't needed for security reasons.
But it is rather tricky to me to flush the TLB when we retire a backing
storage. I don't see how the backing storage is connected to the page table.
Let's say the user unbinds va1 from pa1, then binds va1 to pa2. Then we retire
pa1. Then shader code using va1 is submitted. If we don't flush the TLB after
unbinding va1, the new shader code, which is supposed to use pa2, will still
use pa1 due to the stale entries in the TLB, right? The point is, the TLB
cache is tagged with the virtual address, not the physical address. So after
we unbind va1 from pa1, regardless of whether we retire pa1 or not, va1 can be
bound to another pa2.
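Spelling that sequence out (hypothetical helpers, just to make the ordering
explicit):

#include <linux/types.h>

struct gpu_vm;
struct backing;

void vm_unbind(struct gpu_vm *vm, u64 va);
void vm_bind(struct gpu_vm *vm, u64 va, struct backing *pages);
void release_backing_store(struct backing *pages);
void submit_shader_using(struct gpu_vm *vm, u64 va);

static void stale_tlb_hazard(struct gpu_vm *vm, u64 va1,
			     struct backing *pa1, struct backing *pa2)
{
	vm_unbind(vm, va1);           /* PTEs for va1 -> pa1 removed */
	vm_bind(vm, va1, pa2);        /* page tables now say va1 -> pa2 */
	release_backing_store(pa1);   /* pa1 pages returned to the system */

	/*
	 * Without a TLB flush somewhere above, a stale entry could still
	 * translate va1 -> pa1 for this submission.
	 */
	submit_shader_using(vm, va1);
}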
When you say "retire pa1" I will assume you mean releasing the backing
storage for pa1. At this point i915 currently does do the TLB flush and
that ensures no PTE can point to pa1.
This approach deals with the security of the system as a whole. A client may
still cause rendering corruption or a GPU hang for itself, but that
should be completely isolated. (This is the part where you say
"regardless of whether we retire pa1 or not", I think.)
But I think those are advanced use cases where userspace wants to
manipulate PTEs while something is running on the GPU in parallel. AFAIK
that is limited to the compute "infinite batch" case, so my thinking is to
avoid adding a performance penalty to the common case, especially on
platforms which only have a global flush.
But... to circle back on the measuring angle: until someone invests the time
and effort to benchmark the two approaches (flush on unbind vs flush on
backing store release) we don't really know. All I know is that the perf hit
with the current solution was significant, AFAIR up to teen digit percentages
on some games. And considering those flushes were driven only by shrinker
activity, my thinking was they would be less frequent than the unbinds, and
therefore have the potential for a smaller perf hit.
Regards,
Tvrtko