> From: Chris Wilson
> Sent: Friday, September 19, 2014 1:06 AM
>
> On Sat, Sep 20, 2014 at 02:47:02AM +0800, Jike Song wrote:
> > From: Yu Zhang <yu.c.zhang@xxxxxxxxx>
> >
> > In XenGT, the global graphic memory space is partitioned by multiple
> > vgpu instances in different VMs. The ballooning code is added in
> > i915_gem_setup_global_gtt(), utilizing the drm mm allocator APIs to
> > mark the graphic address space which is partitioned out to other
> > vgpus as reserved.
> > One special treatment for this scenario is that a guard page is
> > added at the end of both the aperture and non-aperture spaces. This
> > is to follow the behavior pattern in the native case. However, here
> > we have a question: does the prefetch issue mentioned at the
> > beginning of i915_gem_setup_global_gtt() happen for all the GTT
> > entries, or just at the end of the BAR0 space?
>
> The CS prefetcher happens everywhere and so can read from the end of one
> range into the beginning of another client's, so requires consideration
> if not actually a guard page. The very last entry in the whole GTT must
> be a guard page (or allocated to something not prefetchable).

So this only applies to the very last page of the whole GTT, right? For
normal non-present GTT entries, the CS prefetcher should behave correctly,
since prefetching can happen anywhere. If that is true, we possibly don't
need to reserve a guard page for each partition, but only one for the last
partition, at the end of the whole GTT.

Or keeping the current way is also fine: it applies a unified policy to
all partitions, and avoids any undesired effect (not sure there is any)
caused by a bad setting of the first GTT entry in an adjacent partition.

> > If all entries may have prefetch issues,
> > then this special guard page is necessary, to protect against
> > unexpected accesses into GTT entries partitioned out to other VMs.
> > Otherwise, we may only need one guard page at the end of the
> > physical GTT space.
>
> I am a bit dubious how this works when userspace still believes that it
> can access the whole mappable aperture, and then how every driver
> attempts to pin its own planes, rings and whatnot (since it still
> believes that it is talking to the actual hardware and that the hardware
> requires access to its virtual address). The host should be able to move
> the ranges around in order to accommodate userspace in any particular
> guest (hence a balloon interface I presume). But I don't see how that is
> possible, and you don't explain it either.

We discussed this with Jesse/Daniel earlier. In an ideal world, user
space should not assume knowledge of hardware resources; it should query
all available resources from the KMD. In that case, once ballooning is
completed, all clients would only use their allocated portion. However,
as you said, there are such assumptions in user space today, which will
fail under XenGT, e.g. assuming a 256MB aperture when only 64MB is
allocated. Jesse/Daniel discussed two ways to improve this situation:
either add a lightweight page-fault mechanism to catch the faulting
access, or change Mesa to remove the assumption. That can be an
orthogonal effort to this patch set.

For displays, we do have a problem with multi-monitor support if the
framebuffer size exceeds the allocated memory. But that's the same
situation as on bare metal, just a different level of capability
limitation.

Also, our ballooning is static, i.e. everything is settled during the
driver load phase. Dynamic ballooning is very complex, since a specific
page may be referenced at kernel/user level, in different structures,
etc. That's a good research topic, but let's pursue the simple solution
first. :-)

> The implementation also looks backwards. To work correctly with the GTT
> allocator, you need to preallocate the reserved space such that it can
> only allocate from the allowed ranges. Similarly, it should evict any
> conflicting nodes when deballooning.

Could you elaborate a bit on the above suggestion?
> -Chris
> --
> Chris Wilson, Intel Open Source Technology Centre

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx