Re: [PATCH 3/5] drm: Add infrastructure for vmap operations of I/O memory

On Thu, Jul 30, 2020 at 10:14:43AM +0200, Thomas Zimmermann wrote:
> Hi
> 
> > On 29.07.20 at 15:57, daniel@xxxxxxxx wrote:
> > On Wed, Jul 29, 2020 at 03:41:46PM +0200, Thomas Zimmermann wrote:
> >> Most platforms allow for accessing framebuffer I/O memory with regular
> >> load and store operations. Some platforms, such as sparc64, require
> >> the use of special instructions instead.
> >>
> >> This patch adds vmap_iomem to struct drm_gem_object_funcs. The new
> >> interface drm_client_buffer_vmap_iomem() gives DRM clients access to the
> >> I/O memory buffer. The semantics of struct drm_gem_object_funcs.vmap
> >> change slightly. It used to return system or I/O memory. Now it is
> >> expected to return memory addresses that can be accessed with regular
> >> load and store operations. So nothing changes for existing implementations
> >> of GEM objects. If the GEM object also implements vmap_iomem, a call
> >> to vmap shall only return system memory, even if I/O memory could be
> >> accessed with loads and stores.
> >>
> >> The existing interface drm_client_buffer_vmap() shall only return memory
> >> as given by drm_gem_vmap() (i.e., memory that is accessible via regular
> >> load and store operations). The new interface drm_client_buffer_vmap_iomem()
> >> shall only return I/O memory.
> >>
> >> DRM clients must map buffers by calling drm_client_buffer_vmap_iomem()
> >> and drm_client_buffer_vmap() to get the buffer in I/O or system memory.
> >> Each function returns NULL if the buffer is in the other memory area.
> >> Depending on the type of the returned memory, clients must access the
> >> framebuffer with the appropriate operations.
> >>
> >> Signed-off-by: Thomas Zimmermann <tzimmermann@xxxxxxx>
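
For illustration, the access pattern described in the commit message would
look roughly like this from a client's point of view (a sketch written
against the interfaces this patch adds, not code taken from the patch;
client_fill() is a hypothetical name and error handling is simplified):

  static int client_fill(struct drm_client_buffer *buffer, size_t off,
                         const void *src, size_t len)
  {
          void *vaddr = drm_client_buffer_vmap(buffer);

          if (!IS_ERR(vaddr)) {
                  /* buffer is in system memory: regular stores work */
                  memcpy(vaddr + off, src, len);
          } else {
                  void __iomem *vaddr_iomem =
                          drm_client_buffer_vmap_iomem(buffer);

                  if (IS_ERR(vaddr_iomem))
                          return PTR_ERR(vaddr_iomem);
                  /* buffer is in I/O memory: use the I/O accessors */
                  memcpy_toio(vaddr_iomem + off, src, len);
          }
          drm_client_buffer_vunmap(buffer);

          return 0;
  }
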
> > 
> > Hm I don't think this works, since for more dynamic framebuffers (like
> > real big gpu ttm drivers) this is a dynamic thing, which can change every
> > time we do an mmap. So I think the ttm approach of having an is_iomem flag
> > is a lot better.
> > 
> > The trouble with that is that you don't have correct checking of sparse
> > mappings, but oh well :-/ The one idea I've had to address that is using
> > something like this
> > 
> > typedef struct {
> > 	bool is_iomem;
> > 	union {
> > 		void __iomem *vaddr_iomem;
> > 		void *vaddr;
> > 	};
> > } dma_buf_addr_t;
> > 
> > And then having a wrapper for memcpy_from_dma_buf_addr and
> > memcpy_to_dma_buf_addr, which switches between memcpy and memcpy_from/toio
> > depending upon the is_iomem flag.
> > 
> > But it's a lot more invasive unfortunately :-/
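
To make that concrete, such wrappers might look roughly like this (a sketch
only; neither dma_buf_addr_t nor these helpers exist in the tree yet):

  static void memcpy_to_dma_buf_addr(dma_buf_addr_t dst, const void *src,
                                     size_t len)
  {
          if (dst.is_iomem)
                  memcpy_toio(dst.vaddr_iomem, src, len);
          else
                  memcpy(dst.vaddr, src, len);
  }

  static void memcpy_from_dma_buf_addr(void *dst, dma_buf_addr_t src,
                                       size_t len)
  {
          if (src.is_iomem)
                  memcpy_fromio(dst, src.vaddr_iomem, len);
          else
                  memcpy(dst, src.vaddr, len);
  }
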
> 
> What do you think about introducing read and write callbacks for GEM
> objects? Like this:
> 
>   int drm_gem_read(struct drm_gem_object *gbo, size_t off, size_t len,
> void *buf);
> 
>   int drm_gem_write(struct drm_gem_object *gbo, size_t off, size_t len,
> const void *buf);
> 
> The common case would be memcpy, but GEM implementations could provide
> their own thing. The fbdev blit function would look like
> 
>   vaddr = drm_gem_vmap(gbo)
>   if (IS_ERR(vaddr))
>     return
> 
>   for (each line) {
>     drm_gem_write(gbo, gbo_line_offset, line_size, src)
>     gbo_line_offset = /* next line */
>     src = /* next line */
>   }
> 
>   drm_gem_vunmap(gbo, vaddr);
> 
> The whole mess about I/O access would be self-contained.
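
As a strawman, the common memcpy case of such a drm_gem_write() could be
implemented like this (a sketch against the proposed interface; the
optional ->write callback is hypothetical):

  int drm_gem_write(struct drm_gem_object *gbo, size_t off, size_t len,
                    const void *buf)
  {
          void *vaddr;

          if (gbo->funcs && gbo->funcs->write)
                  return gbo->funcs->write(gbo, off, len, buf);

          /* common case: map the object and copy with regular stores */
          vaddr = drm_gem_vmap(gbo);
          if (IS_ERR(vaddr))
                  return PTR_ERR(vaddr);
          memcpy(vaddr + off, buf, len);
          drm_gem_vunmap(gbo, vaddr);

          return 0;
  }
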

Copying the irc discussion over: We've had that idea floating around years
ago; i915-gem even implemented it in the form of pwrite/pread for userspace.
But by now all userspace has moved over to mmap, so read/write has fallen
out of favour.

I'm also not sure whether we really need to fix more than just fbcon on
the fbdev-on-DRM emulation, and it feels a bit silly to add read/write
just for that. Also the is_iomem flag on the vmap (and maybe eventually
on mmap, no idea) might let us fix this for real eventually.

Cheers, Daniel

> 
> Best regards
> Thomas
> 
> > -Daniel
> > 
> >> ---
> >>  drivers/gpu/drm/drm_client.c   | 52 ++++++++++++++++++++++++++++++++--
> >>  drivers/gpu/drm/drm_gem.c      | 19 +++++++++++++
> >>  drivers/gpu/drm/drm_internal.h |  1 +
> >>  include/drm/drm_client.h       |  8 +++++-
> >>  include/drm/drm_gem.h          | 17 +++++++++--
> >>  5 files changed, 91 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> >> index 495f47d23d87..b5bbe089a41e 100644
> >> --- a/drivers/gpu/drm/drm_client.c
> >> +++ b/drivers/gpu/drm/drm_client.c
> >> @@ -327,6 +327,46 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>  
> >> +/**
> >> + * drm_client_buffer_vmap_iomem - Map DRM client buffer into address space
> >> + * @buffer: DRM client buffer
> >> + *
> >> + * This function maps a client buffer into kernel address space. If the
> >> + * buffer is already mapped, it returns the mapping's address.
> >> + *
> >> + * Client buffer mappings are not ref'counted. Each call to
> >> + * drm_client_buffer_vmap_iomem() should be followed by a call to
> >> + * drm_client_buffer_vunmap(); or the client buffer should be mapped
> >> + * throughout its lifetime.
> >> + *
> >> + * Returns:
> >> + *	The mapped memory's address
> >> + */
> >> +void __iomem *drm_client_buffer_vmap_iomem(struct drm_client_buffer *buffer)
> >> +{
> >> +	void __iomem *vaddr_iomem;
> >> +
> >> +	if (buffer->vaddr_iomem)
> >> +		return buffer->vaddr_iomem;
> >> +
> >> +	/*
> >> +	 * FIXME: The dependency on GEM here isn't required, we could
> >> +	 * convert the driver handle to a dma-buf instead and use the
> >> +	 * backend-agnostic dma-buf vmap support. This would
> >> +	 * require that the handle2fd prime ioctl is reworked to pull the
> >> +	 * fd_install step out of the driver backend hooks, to make that
> >> +	 * final step optional for internal users.
> >> +	 */
> >> +	vaddr_iomem = drm_gem_vmap_iomem(buffer->gem);
> >> +	if (IS_ERR(vaddr_iomem))
> >> +		return vaddr_iomem;
> >> +
> >> +	buffer->vaddr_iomem = vaddr_iomem;
> >> +
> >> +	return vaddr_iomem;
> >> +}
> >> +EXPORT_SYMBOL(drm_client_buffer_vmap_iomem);
> >> +
> >>  /**
> >>   * drm_client_buffer_vunmap - Unmap DRM client buffer
> >>   * @buffer: DRM client buffer
> >> @@ -337,8 +377,16 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>   */
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
> >>  {
> >> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> -	buffer->vaddr = NULL;
> >> +	drm_WARN_ON(buffer->client->dev, buffer->vaddr && buffer->vaddr_iomem);
> >> +
> >> +	if (buffer->vaddr) {
> >> +		drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> +		buffer->vaddr = NULL;
> >> +	}
> >> +	if (buffer->vaddr_iomem) {
> >> +		drm_gem_vunmap(buffer->gem, (void *)buffer->vaddr_iomem);
> >> +		buffer->vaddr_iomem = NULL;
> >> +	}
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
> >>  
> >> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >> index a57f5379fc08..a001be8c0965 100644
> >> --- a/drivers/gpu/drm/drm_gem.c
> >> +++ b/drivers/gpu/drm/drm_gem.c
> >> @@ -1227,6 +1227,25 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
> >>  		vaddr = obj->funcs->vmap(obj);
> >>  	else if (obj->dev->driver->gem_prime_vmap)
> >>  		vaddr = obj->dev->driver->gem_prime_vmap(obj);
> >> +	else if (obj->funcs && obj->funcs->vmap_iomem)
> >> +		vaddr = NULL; /* requires mapping as I/O memory */
> >> +	else
> >> +		vaddr = ERR_PTR(-EOPNOTSUPP);
> >> +
> >> +	if (!vaddr)
> >> +		vaddr = ERR_PTR(-ENOMEM);
> >> +
> >> +	return vaddr;
> >> +}
> >> +
> >> +void __iomem *drm_gem_vmap_iomem(struct drm_gem_object *obj)
> >> +{
> >> +	void __iomem *vaddr;
> >> +
> >> +	if (obj->funcs && obj->funcs->vmap_iomem)
> >> +		vaddr = obj->funcs->vmap_iomem(obj);
> >> +	else if (obj->funcs && obj->funcs->vmap)
> >> +		vaddr = NULL; /* requires mapping as system memory */
> >>  	else
> >>  		vaddr = ERR_PTR(-EOPNOTSUPP);
> >>  
> >> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> >> index 8e01caaf95cc..aa1a3d4f9223 100644
> >> --- a/drivers/gpu/drm/drm_internal.h
> >> +++ b/drivers/gpu/drm/drm_internal.h
> >> @@ -187,6 +187,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
> >>  int drm_gem_pin(struct drm_gem_object *obj);
> >>  void drm_gem_unpin(struct drm_gem_object *obj);
> >>  void *drm_gem_vmap(struct drm_gem_object *obj);
> >> +void __iomem *drm_gem_vmap_iomem(struct drm_gem_object *obj);
> >>  void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
> >>  
> >>  /* drm_debugfs.c drm_debugfs_crc.c */
> >> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> >> index 7aaea665bfc2..94aa075ee4b6 100644
> >> --- a/include/drm/drm_client.h
> >> +++ b/include/drm/drm_client.h
> >> @@ -141,10 +141,15 @@ struct drm_client_buffer {
> >>  	struct drm_gem_object *gem;
> >>  
> >>  	/**
> >> -	 * @vaddr: Virtual address for the buffer
> >> +	 * @vaddr: Virtual address for the buffer in system memory
> >>  	 */
> >>  	void *vaddr;
> >>  
> >> +	/**
> >> +	 * @vaddr_iomem: Virtual address for the buffer in I/O memory
> >> +	 */
> >> +	void __iomem *vaddr_iomem;
> >> +
> >>  	/**
> >>  	 * @fb: DRM framebuffer
> >>  	 */
> >> @@ -156,6 +161,7 @@ drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 heig
> >>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
> >>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> >>  void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> >> +void __iomem *drm_client_buffer_vmap_iomem(struct drm_client_buffer *buffer);
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
> >>  
> >>  int drm_client_modeset_create(struct drm_client_dev *client);
> >> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> >> index 337a48321705..bc735ff522a8 100644
> >> --- a/include/drm/drm_gem.h
> >> +++ b/include/drm/drm_gem.h
> >> @@ -134,17 +134,28 @@ struct drm_gem_object_funcs {
> >>  	 * @vmap:
> >>  	 *
> >>  	 * Returns a virtual address for the buffer. Used by the
> >> -	 * drm_gem_dmabuf_vmap() helper.
> >> +	 * drm_gem_dmabuf_vmap() helper. If the buffer is not
> >> +	 * located in system memory, the function returns NULL.
> >>  	 *
> >>  	 * This callback is optional.
> >>  	 */
> >>  	void *(*vmap)(struct drm_gem_object *obj);
> >>  
> >> +	/**
> >> +	 * @vmap_iomem:
> >> +	 *
> >> +	 * Returns a virtual address for the buffer. If the buffer is not
> >> +	 * located in I/O memory, the function returns NULL.
> >> +	 *
> >> +	 * This callback is optional.
> >> +	 */
> >> +	void __iomem *(*vmap_iomem)(struct drm_gem_object *obj);
> >> +
> >>  	/**
> >>  	 * @vunmap:
> >>  	 *
> >> -	 * Releases the address previously returned by @vmap. Used by the
> >> -	 * drm_gem_dmabuf_vunmap() helper.
> >> +	 * Releases the address previously returned by @vmap or @vmap_iomem.
> >> +	 * Used by the drm_gem_dmabuf_vunmap() helper.
> >>  	 *
> >>  	 * This callback is optional.
> >>  	 */
> >> -- 
> >> 2.27.0
> >>
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel