On 07/01/2015 10:25 AM, ankitprasad.r.sharma@xxxxxxxxx wrote:
> From: Ankitprasad Sharma <ankitprasad.r.sharma@xxxxxxxxx>
>
> This patch adds support for extending the pread/pwrite functionality
> for objects not backed by shmem. The access will be made through the
> gtt interface. This will cover prime objects as well as stolen memory
> backed objects, but for userptr objects it is still forbidden.
>
> v2: drop locks around slow_user_access, prefault the pages before
> access (Chris)
>
> v3: Rebased to the latest drm-intel-nightly (Ankit)
>
> Testcase: igt/gem_stolen
>
> Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_gem.c | 137 +++++++++++++++++++++++++++++++++++-----
>  1 file changed, 120 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 4acf331..4be6eb4 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -629,6 +629,102 @@ shmem_pread_slow(struct page *page, int shmem_page_offset, int page_length,
>  	return ret ? -EFAULT : 0;
>  }
>
> +static inline int
> +slow_user_access(struct io_mapping *mapping,
> +		 loff_t page_base, int page_offset,
> +		 char __user *user_data,
> +		 int length, bool write)
> +{
> +	void __iomem *vaddr_inatomic;
> +	void *vaddr;
> +	unsigned long unwritten;
> +
> +	vaddr_inatomic = io_mapping_map_wc(mapping, page_base);
> +	/* We can use the cpu mem copy function because this is X86. */
> +	vaddr = (void __force *)vaddr_inatomic + page_offset;
> +	if (write)
> +		unwritten = __copy_from_user(vaddr, user_data, length);
> +	else
> +		unwritten = __copy_to_user(user_data, vaddr, length);
> +
> +	io_mapping_unmap(vaddr_inatomic);
> +	return unwritten;
> +}
I am not super familiar with the low-level mapping business, but it looks correct to me. Just one question: are there any downsides to a WC mapping? Would there be any advantage to not asking for WC in the read case?
> +static int
> +i915_gem_gtt_pread_pwrite(struct drm_device *dev,
> +			  struct drm_i915_gem_object *obj, uint64_t size,
> +			  uint64_t data_offset, uint64_t data_ptr, bool write)
> +{
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +	char __user *user_data;
> +	ssize_t remain;
> +	loff_t offset, page_base;
> +	int page_offset, page_length, ret = 0;
> +
> +	ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE);
> +	if (ret)
> +		goto out;
> +
> +	ret = i915_gem_object_set_to_gtt_domain(obj, write);
> +	if (ret)
> +		goto out_unpin;
> +
> +	ret = i915_gem_object_put_fence(obj);
> +	if (ret)
> +		goto out_unpin;
> +
> +	user_data = to_user_ptr(data_ptr);
> +	remain = size;
Strictly speaking a uint64_t can overflow ssize_t here; the compiler does not complain about this case?
> +
> +	offset = i915_gem_obj_ggtt_offset(obj) + data_offset;
> +
> +	if (write)
> +		intel_fb_obj_invalidate(obj, ORIGIN_GTT);
> +
> +	mutex_unlock(&dev->struct_mutex);
> +	if (!write && likely(!i915.prefault_disable))
> +		ret = fault_in_multipages_writeable(user_data, remain);
A bit confusing read/write inversion. :) But correct. Just wondering if it would make sense to invert the boolean, at least in slow_user_access. Or in fact just call it pwrite instead of write, to reflect that pwrite == true means _reading_ from user memory.
> +	while (remain > 0) {
> +		/* Operation in this page
> +		 *
> +		 * page_base = page offset within aperture
> +		 * page_offset = offset within page
> +		 * page_length = bytes to copy for this page
> +		 */
> +		page_base = offset & PAGE_MASK;
> +		page_offset = offset_in_page(offset);
> +		page_length = remain;
> +		if ((page_offset + remain) > PAGE_SIZE)
> +			page_length = PAGE_SIZE - page_offset;
It would save some arithmetic and branching to pull the first, potentially unaligned, copy out of the loop; page_offset would then be zero from the 2nd page onwards.
> +		/* This is a slow read/write as it tries to read from
> +		 * and write to user memory, which may result in page
> +		 * faults
> +		 */
> +		ret = slow_user_access(dev_priv->gtt.mappable, page_base,
> +				       page_offset, user_data,
> +				       page_length, write);
> +
> +		if (ret) {
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		remain -= page_length;
> +		user_data += page_length;
> +		offset += page_length;
> +	}
> +
> +	mutex_lock(&dev->struct_mutex);
The caller took struct_mutex interruptible; should the retake here not be interruptible as well?
> +
> +out_unpin:
> +	i915_gem_object_ggtt_unpin(obj);
> +out:
> +	return ret;
> +}
> +
>  static int
>  i915_gem_shmem_pread(struct drm_device *dev,
>  		     struct drm_i915_gem_object *obj,
> @@ -752,17 +848,19 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
>  		goto out;
>  	}
>
> -	/* prime objects have no backing filp to GEM pread/pwrite
> -	 * pages from.
> -	 */
> -	if (!obj->base.filp) {
> -		ret = -EINVAL;
> -		goto out;
> -	}
> -
>  	trace_i915_gem_object_pread(obj, args->offset, args->size);
>
> -	ret = i915_gem_shmem_pread(dev, obj, args, file);
> +	/* pread for non shmem backed objects */
> +	if (!obj->base.filp) {
> +		if (obj->tiling_mode == I915_TILING_NONE)
> +			ret = i915_gem_gtt_pread_pwrite(dev, obj, args->size,
> +							args->offset,
> +							args->data_ptr,
> +							false);
> +		else
> +			ret = -EINVAL;
> +	} else
> +		ret = i915_gem_shmem_pread(dev, obj, args, file);
Same coding style disagreement on whether or not both blocks should have braces.
Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx