Re: [PATCH 3/6] drm/i915: Split the batch pool by engine

On Wed, Mar 18, 2015 at 04:51:58PM +0000, Tvrtko Ursulin wrote:
> 
> On 03/09/2015 09:55 AM, Chris Wilson wrote:
> >I woke up one morning and found 50k objects sitting in the batch pool
> >and every search seemed to iterate the entire list... Painting the
> >screen in oils would provide a more fluid display.
> >
> >One issue with the current design is that we only check for retirements
> >on the current ring when preparing to submit a new batch. This means
> >that we can have thousands of "active" batches on another ring that we
> >have to walk over. The simplest way to avoid that is to split the pools
> >per ring and then our LRU execution ordering will also ensure that the
> >inactive buffers remain at the front.
> 
> Is the retire work handler not keeping up? Regularly or under some
> specific circumstances?

The retire_work handler that runs at ~1Hz? Yes, it fails to keep up,
which is why we explicitly run retire_ring before each execbuffer.
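For reference, a minimal sketch of the ordering this relies on. The
function names (i915_gem_retire_requests_ring, i915_gem_batch_pool_get)
are from the i915 code of this period; the wrapper itself is
illustrative, not a quote from the series:

	/* Retire completed requests on this ring first so that
	 * obj->active is up to date before we walk the pool; otherwise
	 * the search would skip over batches that are in fact idle.
	 */
	static struct drm_i915_gem_object *
	get_pooled_batch(struct intel_engine_cs *ring, size_t size)
	{
		i915_gem_retire_requests_ring(ring);
		return i915_gem_batch_pool_get(&ring->batch_pool, size);
	}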

> >diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.c b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
> >index 21f3356cc0ab..1287abf55b84 100644
> >--- a/drivers/gpu/drm/i915/i915_gem_batch_pool.c
> >+++ b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
> >@@ -96,8 +96,9 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
> >
> >  	list_for_each_entry_safe(tmp, next,
> >  				 &pool->cache_list, batch_pool_list) {
> >+		/* The batches are strictly LRU ordered */
> >  		if (tmp->active)
> >-			continue;
> >+			break;
> 
> This hunk would be a better fit for 2/6, correct?

No. The explanation is given by the comment + changelog.
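To spell out the invariant the comment relies on: the pool list is kept
in LRU execution order, so once the scan hits an active object,
everything after it must be active too, and the walk can stop. A sketch
of how that ordering is maintained (illustrative helper; the real
list_move_tail sits inside the pool code):

	/* Whenever a batch is handed out for execution, move it to the
	 * tail of the pool list. Idle objects then accumulate at the
	 * head, so the first active entry terminates every search.
	 */
	static void mark_batch_busy(struct i915_gem_batch_pool *pool,
				    struct drm_i915_gem_object *obj)
	{
		list_move_tail(&obj->batch_pool_list, &pool->cache_list);
	}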

> >diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> >index fcb074bd55dc..a6d4d5502188 100644
> >--- a/drivers/gpu/drm/i915/intel_lrc.c
> >+++ b/drivers/gpu/drm/i915/intel_lrc.c
> >@@ -1421,6 +1421,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
> >  	ring->dev = dev;
> >  	INIT_LIST_HEAD(&ring->active_list);
> >  	INIT_LIST_HEAD(&ring->request_list);
> >+	i915_gem_batch_pool_init(dev, &ring->batch_pool);
> >  	init_waitqueue_head(&ring->irq_queue);
> >
> >  	INIT_LIST_HEAD(&ring->execlist_queue);
> 
> 
> Cleanup part for _lrc ?

Probably. It becomes redundant later in the series anyway.
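
For completeness, the matching teardown would presumably be a single
call in the logical ring cleanup path; the placement in
intel_logical_ring_cleanup() is an assumption, not part of this patch:

	/* Hypothetical cleanup hunk for intel_lrc.c: free the pooled
	 * batch objects when the engine is torn down.
	 */
	i915_gem_batch_pool_fini(&ring->batch_pool);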
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx




