On Sun, Jan 08, 2012 at 05:56:31PM -0500, Jerome Glisse wrote:
> On Sun, Jan 8, 2012 at 9:05 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
> > Hi all,
> >
> > Meh, I've wanted to port the small set of helpers nouveau already has
> > to handle per open fd gpu virtual address spaces to core drm, so that
> > I could reuse them for i915. Just to go one small step towards
> > unifying drivers in drm/* a bit ...
> >
> > Looks like I'll have another driver to wrestle or just forget about
> > it and reinvent that wheel for i915, too.
> >
> > </slight rant>
> >
> > Cheers, Daniel
> > --
>
> I looked at nouveau before writing this code. The thing is, in the end
> there is little common code, especially when you take different paths
> in how you handle things (persistent vs. dynamic page tables, for
> instance). Though a couple of things can still be shared. Note that
> the whole radeon code is designed with the possibility of having
> several address spaces per process; though there is no use for such a
> thing today, we believe things like opencl+opengl can benefit from
> each having their own address space.

- I've realized when looking through nouveau that we likely can't share
  much more than a gem_bo->vma lookup plus a bunch of helper functions
  (rough sketch of what I mean at the end of this mail).

- Imo having more than one gpu virtual address space per fd doesn't
  make much sense. libdrm (at least for i915) is mostly just about
  issuing cmdbuffers, hence it's probably easier to just open two fds
  and instantiate two libdrm buffer managers if you want two address
  spaces (again, sketch at the end). Otherwise you have to teach libdrm
  that the same buffer object can have different addresses in different
  address spaces, which is pretty much against the point of gpu virtual
  address spaces.

I also realize that in the dri1 days there was way too much common code
that only gets used by one or two drivers and hence isn't really
commonly usable at all (and also not really of decent quality). So I'm
all in favour of driver-specific stuff, especially for execution and
memory management. But:

- nouveau already has gpu virtual address spaces, radeon just grew them
  with this patch, and i915 is on track to get them, too: patches to
  enable the different hw addressing modes for Sandybridge and later
  are ready, and with Ivybridge the hw engineers ironed out the
  remaining bugs, so we can actually context-switch between different
  address spaces without hitting hw bugs.

- The more general picture is that with the advent of more
  general-purpose apis and usecases for gpus like opencl (or background
  video encoding/decoding/transcoding with libva), users will want to
  control gpu resources. So I expect that we'll grow resource limits,
  schedulers with priorities and maybe also something like control
  groups in a few years. But if we don't put a bit of thought into the
  commonalities of things like gpu virtual address spaces, scheduling
  and similar things, I fear we won't be able to create a sensible
  common interface to allocate and control resources in the future,
  which will result in a sub-par experience.

But if my google-fu doesn't fail me, gpu address spaces for radeon were
posted for the first time on a public list as v22 and merged right
away, so there's been simply no time to discuss cross-driver issues.
Which is part of why I'm slightly miffed ;-)

Cheers, Daniel
--
Daniel Vetter
Mail: daniel@xxxxxxxx
Mobile: +41 (0)79 365 57 48
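
PS: To make the "gem_bo->vma lookup" above a bit more concrete, here's
roughly the helper I have in mind. Untested sketch, all the drm_gpu_*
names are invented for illustration, and it assumes struct
drm_gem_object grows a vma_list head, which it doesn't have today --
that's the bit core drm would need to add:

#include <drm/drmP.h>
#include <linux/list.h>

struct drm_gpu_vm;		/* one address space, e.g. one per open fd */

struct drm_gpu_vma {
	struct list_head bo_link;	/* entry in the bo's list of mappings */
	struct drm_gpu_vm *vm;		/* address space this mapping lives in */
	u64 offset;			/* gpu virtual address in that space */
};

/*
 * gem_bo->vma lookup: find the mapping of @bo in address space @vm.
 * Assumes the hypothetical bo->vma_list mentioned above.
 */
static struct drm_gpu_vma *
drm_gem_vma_lookup(struct drm_gem_object *bo, struct drm_gpu_vm *vm)
{
	struct drm_gpu_vma *vma;

	list_for_each_entry(vma, &bo->vma_list, bo_link)
		if (vma->vm == vm)
			return vma;
	return NULL;
}

Iirc nouveau keeps exactly this kind of per-bo list and looks it up
with nouveau_bo_vma_find(), which is about all I'd expect the three
drivers to actually share.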
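
PPS: And this is all I mean by "just open two fds" -- a userspace-side
sketch against libdrm_intel. The api calls are the real ones, but of
course the kernel doesn't actually give you a separate address space
per fd yet, that's what the Sandybridge/Ivybridge patches are for;
error handling omitted:

#include <fcntl.h>
#include <unistd.h>
#include <intel_bufmgr.h>	/* from libdrm_intel */

int main(void)
{
	/* one fd (and hence, eventually, one gpu address space) each
	 * for the gl and the cl side */
	int fd_gl = open("/dev/dri/card0", O_RDWR);
	int fd_cl = open("/dev/dri/card0", O_RDWR);

	/* two completely independent buffer managers */
	drm_intel_bufmgr *mgr_gl = drm_intel_bufmgr_gem_init(fd_gl, 4096);
	drm_intel_bufmgr *mgr_cl = drm_intel_bufmgr_gem_init(fd_cl, 4096);

	/* every bo belongs to exactly one manager and so ends up with
	 * exactly one gpu virtual address -- no need to teach libdrm
	 * about bos with multiple addresses */
	drm_intel_bo *bo = drm_intel_bo_alloc(mgr_gl, "gl scratch",
					      4096, 4096);

	drm_intel_bo_unreference(bo);
	drm_intel_bufmgr_destroy(mgr_gl);
	drm_intel_bufmgr_destroy(mgr_cl);
	close(fd_gl);
	close(fd_cl);
	return 0;
}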