Re: [PATCH v3 3/8] drm/i915: Partition the fence registers for vGPU in i915 driver

On Wed, Dec 17, 2014 at 06:10:23PM +0100, Daniel Vetter wrote:
> On Wed, Dec 17, 2014 at 11:50:13AM +0000, Tvrtko Ursulin wrote:
> > 
> > On 12/17/2014 11:25 AM, Yu, Zhang wrote:
> > >On 12/17/2014 7:06 PM, Gerd Hoffmann wrote:
> > >>   Hi,
> > >>
> > >>>>It's not possible to allow guests direct access to the fence registers
> > >>>>though.  And if every fence register access traps into the hypervisor
> > >>>>anyway the hypervisor can easily map the guest virtual fence to host
> > >>>>physical fence, so there is no need to tell the guest which fences it
> > >>>>owns, the number of fences is enough.
> > >>>
> > >>>That exactly is the part I don't understand - if it is not required to
> > >>>tell the guest which fences it owns, why is it required to say how many?
> > >>
> > >>There is a fixed assignment of fences to guests, so it's a fixed number.
> > >>But as the hypervisor is involved in any fence access anyway there is no
> > >>need for the guest to know which of the fences it owns, the hypervisor
> > >>can remap that transparently for the guest, without performance penalty.
> > >Thanks Gerd. Exactly.
> > >Although fence registers are partitioned among vGPUs, it is not
> > >necessary for a vGPU to know the physical mmio addresses of the
> > >allocated fence registers.
> > >For example, vGPU 1 with fence size 4 can access the fence registers
> > >from 0x100000-0x10001f; at the same time, vGPU 2 with fence size 8 can
> > >access the fence registers from 0x100000-0x10003f. Although this seems
> > >conflicting, it does not matter, because these mmio accesses are all
> > >supposed to be trapped on the host side, which keeps a record of the
> > >real fence offset of each vGPU (say 0 for vGPU 1 and 4 for vGPU 2) and
> > >then does the remapping. Therefore, the physical operations on the
> > >fence registers will be performed by host code on different ranges
> > >(say, 0x100000-0x10001f for vGPU 1 and 0x100020-0x10005f for vGPU 2).
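
For illustration, here is a minimal sketch of what that host-side remap
could look like when a fence access traps. The struct and helper names
below are made up for the example, they are not taken from the actual
XenGT mediation code:

#include <linux/types.h>
#include <linux/bug.h>

#define FENCE_REG_BASE  0x100000        /* gen6+ fence registers start here */
#define FENCE_REG_SIZE  8               /* each fence register is 64 bits */

struct vgpu {                           /* hypothetical per-guest state */
        u32 fence_base;                 /* first physical fence owned by this vGPU */
        u32 fence_total;                /* number of fences assigned to it */
};

/* Translate a trapped guest fence mmio offset into the host offset. */
static u32 vgpu_fence_remap(struct vgpu *vgpu, u32 guest_off)
{
        u32 idx = (guest_off - FENCE_REG_BASE) / FENCE_REG_SIZE;

        if (WARN_ON(idx >= vgpu->fence_total))
                idx = 0;                /* a real implementation would reject the access */

        /* vGPU 1 (base 0): 0x100000-0x10001f stays in place,
         * vGPU 2 (base 4): 0x100000-0x10003f lands in 0x100020-0x10005f. */
        return FENCE_REG_BASE +
               (vgpu->fence_base + idx) * FENCE_REG_SIZE +
               (guest_off & (FENCE_REG_SIZE - 1));
}

Both guests then program "their" fences starting at 0x100000, but the
host ends up touching disjoint physical registers.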
> > 
> > Okay, I think I get it now. What I had in mind is not really possible
> > without a dedicated hypervisor<->guest communication channel. Or, in
> > other words, you would have to extend the way i915 allocates fences
> > from plain mmio writes to something bi-directional.
> 
> You could virtualize fences the same way we virtualize fences for
> userspace gtt mmap access: if we need to steal a fence we simply
> unmap the relevant gtt mmio range from the guest ptes. This should work
> well since on current platforms the only thing that really needs fences
> is cpu access; the gpu doesn't need them. Well, except for some oddball
> cases in the display block, but those are virtualized anyway (no fbc for
> guests or anything else like that).
> 
> This would also fit a bit more closely with how the host manages fences,
> benefiting the new kvm/xengt-on-i915 mode for the host instead of the
> current implementation, which also virtualizes host i915 access cycles.
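
A rough sketch of what that fence-stealing idea could look like on the
mediation side, with purely hypothetical helpers (vgpu_unmap_guest_range
and friends don't exist anywhere, they just stand in for the guest pte
zapping described above):

#include <linux/types.h>

struct vgpu;                            /* per-guest state, as above */

struct vgpu_fence {
        u64 aperture_start;             /* gtt range covered by this fence */
        u64 size;
        struct vgpu *owner;             /* NULL when the fence is free */
};

/* Hypothetical helper: zap the guest ptes for an aperture range so the
 * next cpu access from that guest faults back into the mediator. */
void vgpu_unmap_guest_range(struct vgpu *vgpu, u64 start, u64 size);

static void vgpu_steal_fence(struct vgpu_fence *fence)
{
        /* Remove the old owner's mapping of the fenced range; that guest
         * re-faults on its next cpu access and gets a fence assigned on
         * demand, the same way the host already handles userspace gtt
         * mmaps. */
        vgpu_unmap_guest_range(fence->owner, fence->aperture_start,
                               fence->size);

        fence->owner = NULL;            /* register can be handed out again */
}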

Btw this isn't a blocker for the current implementation, since we can
always just tell guests that they can use all fences once we implement
this. So the current, simpler implementation isn't restricting us in any
meaningful way.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx




