On Tue, Jul 14, 2015 at 3:45 AM, Andrew Chew <achew@xxxxxxxxxx> wrote:
> I apologize for my ignorance. In digging through nouveau, I've become
> a bit confused regarding the relationship between virtual address
> allocations and nouveau bo's.
>
> From my reading of the code, it seems that a nouveau_bo really
> encapsulates a buffer (whether imported, or allocated within nouveau
> like, say, pushbuffers). So I'm confused about an earlier statement
> that to allocate a chunk of address space, I have to create a
> nouveau_bo for it.

That is the case right now because there is no means for user-space to
manipulate the GPU address space without a nouveau_bo, so the two are
closely related. But if you implement the address space reservation
ioctl, a nouveau_bo will not be required until you want to back that
space with actual memory.

> What I really want to do is reserve some space in the address allocator
> (the stuff in nvkm/subdev/mmu/base.c). Note that there are no buffers
> at this time. This is just blocking out some chunk of the address space
> so that normal address space allocations (for, say, bo's) avoid this
> region.
>
> At some point after that, I'd like to import a buffer, and map it to
> certain regions of my pre-allocated address space. This is why I can't
> go through the normal path of importing a buffer...that path assumes
> there is no address for this buffer, and tries to allocate one. In our
> case, we already have an address in mind. Naively, at this point, I'd
> like to create a nouveau_bo for this imported buffer, but not have it
> go through the address allocator and instead just take a fixed address.

I think our main issue is that (someone correct me if I am wrong)
Nouveau automatically creates a GPU mapping when a buffer is imported
through PRIME. If we can (1) prevent this from happening (or, less
ideally, re-map the imported buffer afterwards), and (2) perform the
mapping ourselves, we should be good.
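As background on the reservation side of this, the behavior Andrew describes (block out a chunk of the address space so that normal allocations avoid it) can be sketched as a toy first-fit allocator. To be clear, this is not nouveau's actual allocator from nvkm/subdev/mmu/base.c; the struct names, the fixed-size range list, and the first-fit policy are all made up for the example:

```c
#include <stdint.h>

/* Toy model of an address-space manager: a list of reserved/used
 * ranges plus a first-fit allocator that skips them.  An "address
 * space reservation" ioctl would call the equivalent of vm_reserve()
 * so later automatic bo allocations avoid the user-chosen region. */

#define MAX_RANGES 32

struct range { uint64_t start, end; };          /* [start, end) */

struct toy_vm {
	uint64_t base, limit;                   /* managed VA window */
	struct range used[MAX_RANGES];
	int nused;
};

static int overlaps(struct range a, struct range b)
{
	return a.start < b.end && b.start < a.end;
}

/* Reserve an explicit range; fails if it collides with anything. */
static int vm_reserve(struct toy_vm *vm, uint64_t start, uint64_t size)
{
	struct range r = { start, start + size };

	if (vm->nused == MAX_RANGES || start < vm->base || r.end > vm->limit)
		return -1;
	for (int i = 0; i < vm->nused; i++)
		if (overlaps(r, vm->used[i]))
			return -1;
	vm->used[vm->nused++] = r;
	return 0;
}

/* First-fit allocation that transparently skips reserved ranges. */
static uint64_t vm_alloc(struct toy_vm *vm, uint64_t size)
{
	uint64_t addr = vm->base;
again:
	for (int i = 0; i < vm->nused; i++) {
		struct range r = { addr, addr + size };
		if (overlaps(r, vm->used[i])) {
			addr = vm->used[i].end;  /* jump past conflict */
			goto again;
		}
	}
	if (addr + size > vm->limit)
		return UINT64_MAX;               /* out of space */
	vm_reserve(vm, addr, size);              /* record as used */
	return addr;
}
```

With this model, reserving [0x10000, 0x20000) up front means later vm_alloc() calls are placed around that hole, which is exactly the property the reservation ioctl would need to guarantee.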
For the sake of completeness, we should also solve that same issue for
buffers created using the NOUVEAU_GEM_NEW ioctl.

I am not sure how we can make (1) happen. We surely cannot change the
semantics of DRM_IOCTL_PRIME_FD_TO_HANDLE without breaking user-space.
But maybe we can delay that automatic mapping, and only perform it if
no manual mapping has happened in the meantime? That would leave us a
window right after the object is imported in which to decide its GPU
address, which is precisely what we need. For objects created using
NOUVEAU_GEM_NEW, things might be as simple as adding a "do not map yet"
flag.

Regarding (2), I feel this is related to another issue we have with
imported buffers: we have no way to specify their tiling options,
contrary to buffers created with NOUVEAU_GEM_NEW. I made a pretty lame
attempt at fixing that point
(http://lists.freedesktop.org/archives/dri-devel/2015-May/083052.html),
but it was rejected, probably for the best now that I think of it.

Tiling and offset inside the GPU VM are both properties of a buffer, so
why not handle them both with the same ioctl? We currently have
DRM_NOUVEAU_GEM_INFO, which returns all these properties to user-space
(see struct drm_nouveau_gem_info). How about introducing
DRM_NOUVEAU_GEM_SET_INFO, which would allow user-space to change these
properties, i.e. the tiling flags (which the tiling ioctl attempted to
do), but also the mapping address if it is specified and valid?

So in order to import a buffer at a fixed GPU address, after reserving
a portion of the GPU VM for that purpose, one would:

1) use DRM_IOCTL_PRIME_FD_TO_HANDLE to import the buffer,
2) invoke DRM_NOUVEAU_GEM_SET_INFO to map (or re-map, if 1) already
   created a mapping) the buffer at the right address.

I suspect this proposal is full of flaws though, so feel free to shoot
it down.
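To make the two steps a bit more concrete, here is a rough user-space sketch. Everything about it is speculative: DRM_NOUVEAU_GEM_SET_INFO does not exist, the struct below is merely assumed to mirror struct drm_nouveau_gem_info from the uapi header, and the GPU_PAGE constant and set_info_req() helper are inventions for the example. Only the argument packing is shown; the comment notes how I would expect the actual submission to look:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical request, assumed to mirror struct drm_nouveau_gem_info
 * (nouveau_drm.h); DRM_NOUVEAU_GEM_SET_INFO itself is only a proposal. */
struct drm_nouveau_gem_set_info {
	uint32_t handle;      /* from DRM_IOCTL_PRIME_FD_TO_HANDLE */
	uint32_t domain;
	uint64_t size;
	uint64_t offset;      /* requested fixed GPU VA */
	uint64_t map_handle;
	uint32_t tile_mode;   /* what the rejected tiling ioctl targeted */
	uint32_t tile_flags;
};

#define GPU_PAGE 0x1000ULL    /* assumed GPU page size */

/* Build the request for step 2) of the proposed flow: re-map an
 * imported buffer at a caller-chosen, page-aligned address. */
static int set_info_req(struct drm_nouveau_gem_set_info *req,
			uint32_t handle, uint64_t fixed_va,
			uint32_t tile_mode, uint32_t tile_flags)
{
	if (fixed_va & (GPU_PAGE - 1))
		return -1;    /* offset must be page-aligned to be valid */
	memset(req, 0, sizeof(*req));
	req->handle = handle;
	req->offset = fixed_va;
	req->tile_mode = tile_mode;
	req->tile_flags = tile_flags;
	return 0;
	/* The real submission would then be something along the lines of
	 *   drmCommandWriteRead(fd, DRM_NOUVEAU_GEM_SET_INFO,
	 *                       req, sizeof(*req));
	 * issued after step 1) produced `handle`. */
}
```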
:)

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel