Re: [PATCH v3 3/8] drm/i915: Partition the fence registers for vGPU in i915 driver

On 12/17/2014 11:25 AM, Yu, Zhang wrote:
> On 12/17/2014 7:06 PM, Gerd Hoffmann wrote:
>>    Hi,
>>
>>>> It's not possible to allow guests direct access to the fence registers
>>>> though.  And if every fence register access traps into the hypervisor
>>>> anyway the hypervisor can easily map the guest virtual fence to host
>>>> physical fence, so there is no need to tell the guest which fences it
>>>> owns, the number of fences is enough.
>>>
>>> That exactly is the part I don't understand - if it is not required to
>>> tell the guest which fences it owns, why is it required to say how many?
>>
>> There is a fixed assignment of fences to guests, so it's a fixed number.
>> But as the hypervisor is involved in any fence access anyway there is no
>> need for the guest to know which of the fences it owns; the hypervisor
>> can remap that transparently for the guest, without performance penalty.
> Thanks Gerd. Exactly.
> Although fence registers are partitioned to vGPUs, it is not necessary
> for a vGPU to know the physical mmio addresses of the allocated fence
> registers.
> For example, vGPU 1 with fence size 4 can access the fence registers
> from 0x100000-0x10001f; at the same time, vGPU 2 with fence size 8 can
> access the fence registers from 0x100000-0x10003f. Although this seems
> conflicting, it does not matter, because these mmio addresses are all
> supposed to be trapped on the host side, which keeps a record of the
> real fence offset of each vGPU (say 0 for vGPU 1 and 4 for vGPU 2)
> and then does the remapping. Therefore, the physical operations on the
> fence registers will be performed by host code on different ones (say,
> 0x100000-0x10001f for vGPU 1 and 0x100020-0x10005f for vGPU 2).
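The trap-and-remap scheme described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual i915/XenGT code: the `vgpu` struct, the register base, and the helper name are all assumptions, with each fence register taken to be 8 bytes wide.

```c
#include <stdint.h>

/* Assumed guest-visible MMIO base of the fence registers and the
 * size of one 64-bit fence register (illustrative values). */
#define FENCE_REG_BASE  0x100000u
#define FENCE_REG_SIZE  8u

struct vgpu {
	uint32_t fence_base;	/* first physical fence owned; host-only */
	uint32_t fence_num;	/* number of fences exposed to the guest */
};

/* Translate a trapped guest MMIO offset into the physical fence
 * register offset. Every vGPU sees its fences starting at
 * FENCE_REG_BASE; the host shifts the access by the per-vGPU base
 * fence index. Returns 0 if the guest touched a fence it does not
 * own. */
static uint32_t remap_fence_offset(const struct vgpu *v, uint32_t guest_off)
{
	uint32_t idx;

	if (guest_off < FENCE_REG_BASE)
		return 0;

	idx = (guest_off - FENCE_REG_BASE) / FENCE_REG_SIZE;
	if (idx >= v->fence_num)
		return 0;	/* out of this vGPU's partition */

	return FENCE_REG_BASE +
	       (v->fence_base + idx) * FENCE_REG_SIZE +
	       (guest_off & (FENCE_REG_SIZE - 1));
}
```

With the numbers from the example, vGPU 1 (base 0, size 4) accessing 0x100000 stays at 0x100000, while vGPU 2 (base 4, size 8) accessing the same guest offset 0x100000 is remapped to 0x100020 - the two guests never collide on the physical registers.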

Okay, I think I get it now. What I had in mind is not really possible without a dedicated hypervisor<->guest communication channel. In other words, you would have to extend the way i915 allocates them, from mmio writes to something bi-directional.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx