On Wed, Feb 26, 2020 at 11:23 PM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
> On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> > The main motivation behind this is to eventually have something like this:
> >
> > struct virtio_gpu_shmem {
> >         struct drm_gem_shmem_object base;
> >         uint32_t hw_res_handle;
> >         struct sg_table *pages;
> >         (...)
> > };
> >
> > struct virtio_gpu_vram {
> >         struct drm_gem_object base; // or *drm_gem_vram_object
> >         uint32_t hw_res_handle;
> >         {offset, range};
> >         (...)
> > };
>
> Given that we probably will not use drm_gem_vram_object

Makes sense not to use drm_gem_vram_object for now.

> and drm_gem_shmem_object->base is drm_gem_object I think we can
> go this route:
>
> struct virtgpu_object {

Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense. A bit
less wordy, though the current code is based on "virtio_gpu".

>         struct drm_gem_shmem_object base;
>         enum object_type;
>         uint32_t hw_res_handle;
>         [ ... ]
> };
>
> struct virtgpu_object_shmem {
>         struct virtgpu_object base;
>         struct sg_table *pages;
>         [ ... ]
> };
>
> struct virtgpu_object_hostmem {
>         struct virtgpu_object base;
>         {offset, range};
>         (...)

I'm a kernel newbie, so it's not obvious to me why struct
drm_gem_shmem_object would be a base class for struct
virtgpu_object_hostmem. The core utility of drm_gem_shmem_helper
seems to be getting, pinning, and releasing pages. But with hostmem
we won't have an array of pages, just an (offset, length) pair --
which drm_gem_shmem_helper function is useful here?

Side question: are drm_gem_object_funcs.vmap(..) /
drm_gem_object_funcs.vunmap(..) even possible for hostmem?

P.S: The proof-of-concept hostmem implementation is on GitLab [1][2].
Some notes:

- io_remap_pfn_range to get a userspace mapping
- calls drm_gem_private_object_init(..) rather than
  drm_gem_object_init [which sets up the shmemfs backing store]
[1] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/drivers/gpu/drm/virtio/virtgpu_drv.h#L80
[2] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/drivers/gpu/drm/virtio/virtgpu_hostmem.c#L179

> };
>
> Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem
> from virtgpu_object via container_of which also sanity-check
> object_type (maybe we can check drm_gem_object->ops for that instead of
> adding a new field).
>
> > Sending this out to solicit feedback on this approach. Whichever
> > approach we decide on, landing incremental changes to internal
> > structures reduces rebasing costs and avoids mega-changes.
>
> That certainly makes sense.
>
> cheers,
>   Gerd