Re: [PATCH 6/6] drm/i915: obey wbinvd threshold in more places


 



On Mon, Feb 09, 2015 at 01:54:19PM -0800, Ben Widawsky wrote:
> Signed-off-by: Ben Widawsky <ben@xxxxxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_drv.h     |  4 ++++
>  drivers/gpu/drm/i915/i915_gem.c     | 32 ++++++++++++++++++++++++++++----
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 13 ++++++++++---
>  3 files changed, 42 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 5d2f62d..dfecdfd 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2818,6 +2818,10 @@ static inline bool cpu_cache_is_coherent(struct drm_device *dev,
>  {
>  	return HAS_LLC(dev) || level != I915_CACHE_NONE;
>  }
> +static inline bool i915_gem_obj_should_clflush(struct drm_i915_gem_object *obj)
> +{
> +	return obj->base.size >= to_i915(obj->base.dev)->wbinvd_threshold;
> +}
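
For concreteness, a minimal sketch of how a call site might use this helper, assuming the drm_clflush_sg()/wbinvd_on_all_cpus() calls already available in the kernel; the function below is illustrative only, not one of the hunks in this patch:

/* Illustrative sketch (not from the patch): choose between flushing just
 * this object's backing pages and a bulk WBINVD, based on the size
 * threshold stored in dev_priv->wbinvd_threshold. */
static void i915_flush_object_cpu_cache(struct drm_i915_gem_object *obj)
{
	if (i915_gem_obj_should_clflush(obj))
		wbinvd_on_all_cpus();		/* one bulk flush of every CPU's caches */
	else
		drm_clflush_sg(obj->pages);	/* clflush only this object's pages */
}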

if (i915_gem_obj_should_clflush(obj)) wbinvd()? The predicate is true exactly
when we want wbinvd rather than clflush, so the name reads backwards.

Does wbinvd always have the same characteristic threshold, even when coupled
with a second access (read or write) inside the TLB flushing of
kunmap_atomic? I would imagine these workloads are dramatically different
from the clflush replacement in execbuffer.
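
(For reference, the per-page workload in question looks roughly like the
shmem pwrite fast path, where the clflush is issued under kmap_atomic right
next to the user copy; the sketch below is illustrative, not the exact i915
code:)

/* Illustrative sketch of a per-page partial flush: the clflush is coupled
 * with a CPU copy under kmap_atomic, so the access pattern (and hence the
 * break-even point against wbinvd) differs from a single bulk flush at
 * execbuffer time. */
static int copy_page_and_flush(struct page *page, int offset, int len,
			       const char __user *user_data,
			       bool needs_clflush)
{
	char *vaddr;
	int ret;

	vaddr = kmap_atomic(page);
	if (needs_clflush)
		drm_clflush_virt_range(vaddr + offset, len);
	ret = __copy_from_user_inatomic(vaddr + offset, user_data, len);
	kunmap_atomic(vaddr);	/* tears down the atomic mapping */

	return ret ? -EFAULT : 0;
}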
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx




