> From: Daniel Vetter
> Sent: Thursday, October 23, 2014 8:10 PM
>
> On Thu, Oct 23, 2014 at 01:01:28PM +0200, Gerd Hoffmann wrote:
> > Hi,
> >
> > > Stuff like driver load/unload, suspend/resume, runtime pm and gpu reset are
> > > already super-fragile as-is. Every time we change something in there, a
> > > bunch of related things fall apart. With vgt we'll have even more
> > > complexity in there, and I really think we need to make that complexity
> > > explicit. Otherwise we'll always break vgt support for host systems by
> > > accident when working on upstream. So in my experience being explicit with
> > > these dependencies massively reduces maintenance headaches long term.
> >
> > I think that makes sense. vGT can leave a lot of the low-level work such
> > as power management to i915 then, so we don't duplicate that code in vGT
> > and i915.
>
> It's not just the duplication, but interactions. And like I've said those
> are already making things in the i915 really messy. And then there's also
> all the new features like gpu scheduling, runtime pm and all that which
> we're constantly adding. Keeping up the appearance that i915 is in control
> on the host side but actually isn't would fall apart rather quickly I
> fear.

Yes, we also try to avoid duplication in the original approach. For PM or reset, for example, we leverage the existing i915 interfaces and just add vgt hooks there to do the vgt-specific work. I believe this won't change even with the newly proposed approach. What actually makes the difference is the interception of all MMIO/GTT accesses, which separates vgt from i915 at a very low level, with the limitations Daniel mentioned; those can be better handled if the dependencies are exposed at a higher level.

> > Stacking vGT on top of i915 also makes it a lot easier to have it as a
> > runtime option (just an additional kernel module you load when you need
> > it).
> > The other way around, vGT would be a hard dependency for i915, even if
> > you don't use it (you could compile it out probably, but distro kernels
> > can't do that if they want to support vGT).
>
> We probably need to make a few changes to i915, maybe on the command
> submission side. But those should definitely be small enough that we don't
> need a compile time option for them. Furthermore the command submission
> code is getting rearchitected now (due to other features), so it's the
> perfect time to make sure vgt will work, too.

We'll think about how to avoid the low-level interception and instead spell out the vgt requirements explicitly within a high-level interface. But I don't expect this to end up as a simple vgt-on-top-of-i915 model, i.e. vgt just becoming a caller of i915. There are various places where we need vgt hooks within the high-level interfaces to make things correct, such as PM, reset, command submission, and interrupt handling.

> > Another note: Right now the guest display can only be sent to one of the
> > crtcs. Long term I want more options there, such as exporting the guest
> > display as dma-buf on the host so it can be blitted to some window. Or
> > even let the gpu encode the guest display as video, so we can send it
> > off over the network. I suspect that is also easier to implement if
> > i915 manages all resources and vGT just allocates from i915 what it
> > needs for the guest.
>
> Yeah that should be really simple to add, we already have abstractions for
> the different kinds of buffer objects (native shmem backed gem object,
> carveout/stolen mem, dma-buf imported, userptr).

We have an internal patch for such a feature, allowing a user-space agent to map a VM's framebuffer and then composite it for arbitrary effects. It's not included now, as it's not a core feature.
:-)

> I guess from a really high level this boils down to having a xen-like
> design (where the hypervisor and dom0 driver are separate, but cooperate
> somewhat) or kvm (where the virtualization sits on top of a normal
> kernel). Afaics the kvm model seems to have a lot more momentum overall.

Right, that's part of the story, and why we initially even had vgt as a separate kernel module. Our goal is to have a single design working for both Xen and KVM, by keeping the core logic in i915, glued together with a shim driver provided by each hypervisor.

Thanks
Kevin
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx