Re: buggy/weird behavior in ttm

On 10/11/2012 04:50 PM, Maarten Lankhorst wrote:
I was trying to clean ttm up a little so my changes would be less invasive, and simplify
the code for debuggability. During testing I noticed the following weirdnesses:
- ttm_mem_evict_first ignores no_wait_gpu if the buffer is on the ddestroy list.
   If you follow the code, it will effectively spin in ttm_mem_evict_first if a bo
   is on the list and no_wait_gpu is true.

Yes, you're right. This is a bug. At a first glance it looks like the code should
return unconditionally after kref_put().
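
Roughly something like the below, i.e. just hand whatever ttm_bo_cleanup_refs()
returns back to the caller instead of retrying. Untested sketch against my
recollection of the current code, so don't trust it line for line:

	/* in ttm_mem_evict_first(): */
	if (!list_empty(&bo->ddestroy)) {
		spin_unlock(&glob->lru_lock);
		ret = ttm_bo_cleanup_refs(bo, interruptible,
					  no_wait_reserve, no_wait_gpu);
		kref_put(&bo->list_kref, ttm_bo_release_list);

		/*
		 * No "goto retry" on -EBUSY here; that retry is what ends
		 * up spinning when no_wait_gpu is set and the bo stays busy.
		 */
		return ret;
	}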



I was working on a commit that removes the fence lock, since I'm killing it off,
but that requires some kind of defined behavior for this. Unless we leave this
in place as the expected behavior...

I don't quite follow you? If you mean defined behavior for the fence lock, it's that
it protects bo::sync_obj and bo::sync_obj_arg of *all* buffers; it was previously one
lock per buffer. The locking order is that it should be taken before the lru lock, but
looking at the code it seems it could be quite a bit simplified the other way around...

Anyway, if you plan to remove the fence lock and protect it with reserve, you must make sure that a waiting reserve is never done in a destruction path. I think this
mostly concerns the nvidia driver.
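
For reference, the pattern the lock is there to protect is roughly the following.
wait_for_bo_idle() is just a made-up name for illustration (it's more or less what
ttm_bo_wait() does), and the sync_obj hook signatures are from memory:

	static int wait_for_bo_idle(struct ttm_buffer_object *bo,
				    bool interruptible)
	{
		struct ttm_bo_device *bdev = bo->bdev;
		struct ttm_bo_driver *driver = bdev->driver;
		void *sync_obj, *sync_obj_arg;
		int ret = 0;

		spin_lock(&bdev->fence_lock);
		if (bo->sync_obj != NULL) {
			/* Take a reference and snapshot the arg under the
			 * lock, drop the lock, then wait. */
			sync_obj = driver->sync_obj_ref(bo->sync_obj);
			sync_obj_arg = bo->sync_obj_arg;
			spin_unlock(&bdev->fence_lock);
			ret = driver->sync_obj_wait(sync_obj, sync_obj_arg,
						    false, interruptible);
			driver->sync_obj_unref(&sync_obj);
		} else {
			spin_unlock(&bdev->fence_lock);
		}
		return ret;
	}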


- no_wait_reserve is ignored if no_wait_gpu is false
   ttm_bo_reserve_locked can only return -EBUSY if no_wait_reserve is true, but
   the caller then does a wait_unreserved if no_wait_gpu is false.
I'm planning on removing this argument and acting as if it were always true, since
nothing on the lru list should currently fail to reserve.

Yes, since all buffers that are reserved are removed from the LRU list, there
should never be a waiting reserve on them, so no_wait_reserve can be removed
from ttm_mem_evict_first, ttm_bo_evict and possibly other functions in the call chain.
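
So inside ttm_mem_evict_first the reserve of the BO picked off the LRU list can
simply become a trylock, something like this (argument order of
ttm_bo_reserve_locked from memory):

	/* in ttm_mem_evict_first(), with no_wait_reserve gone: */
	spin_lock(&glob->lru_lock);
	bo = list_first_entry(&man->lru, struct ttm_buffer_object, lru);
	kref_get(&bo->list_kref);

	/*
	 * Reserved buffers are taken off the LRU under the lru_lock, so a
	 * no-wait reserve of an LRU entry should never hit -EBUSY here.
	 */
	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);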


- effectively unlimited callchain between some functions that all go through
   ttm_mem_evict_first:

ttm_mem_evict_first -> ttm_bo_evict --+---------------------------+-> ttm_bo_mem_space -> ttm_bo_mem_force_space -> ttm_mem_evict_first
                                      \-> ttm_bo_handle_move_mem -/
I'm not surprised that there was a deadlock before; it seems to me it would be
pretty suicidal to ever do a blocking reserve on any of those lists, and lockdep
would be all over you for it.

Well, at first glance this may look worse than it actually is. The driver's eviction
memory order determines the recursion depth, and typically it's 0 or 1, since a
subsequent ttm_mem_evict_first should never touch the same LRU lists as the first one.
What would typically happen is that a BO is evicted from VRAM to TT, and if there is
no space in TT, another BO is evicted to system memory, and the chain is terminated.
However, a driver could in principle set up any eviction order, but that would be
a BUG.

But in essence, as you say, even with a small recursion depth a waiting reserve
could cause a deadlock. There should, however, currently be no waiting reserves
in the eviction path.
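
To make the eviction-order point a bit more concrete, this is roughly what a sane
evict_flags hook looks like (illustration only, not copied from any particular
driver, and flag/member names from memory): it always evicts "downwards",
VRAM -> TT -> SYSTEM, which is what bounds the recursion through
ttm_mem_evict_first.

	static void example_evict_flags(struct ttm_buffer_object *bo,
					struct ttm_placement *placement)
	{
		static const uint32_t tt_placement =
			TTM_PL_FLAG_TT | TTM_PL_FLAG_CACHED;
		static const uint32_t system_placement =
			TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED;

		placement->fpfn = 0;
		placement->lpfn = 0;

		switch (bo->mem.mem_type) {
		case TTM_PL_VRAM:
			/* VRAM contents are evicted to TT... */
			placement->placement = &tt_placement;
			break;
		default:
			/* ...and everything else goes to system memory, so
			 * a chain of evictions always terminates. */
			placement->placement = &system_placement;
			break;
		}
		placement->num_placement = 1;
		placement->busy_placement = placement->placement;
		placement->num_busy_placement = 1;
	}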


Also it seems ttm_bo_move_ttm, ttm_bo_move_memcpy and ttm_bo_move_accel_cleanup
don't use some of their arguments, so could those be dropped?

Yes, but they are designed so that they can be plugged into the driver::move interface,
so if you change the argument list you should change driver::move as well and keep
all of them in sync.
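
To illustrate (hook and helper parameter lists from memory, and example_bo_move
is obviously a made-up name): the intended usage is that a driver without a
hardware copy engine implements its move hook by just falling through to the
generic helper, which only works nicely if the argument lists line up.

	static int example_bo_move(struct ttm_buffer_object *bo, bool evict,
				   bool interruptible, bool no_wait_reserve,
				   bool no_wait_gpu, struct ttm_mem_reg *new_mem)
	{
		/* No copy engine: fall back to the generic memcpy move. */
		return ttm_bo_move_memcpy(bo, evict, no_wait_reserve,
					  no_wait_gpu, new_mem);
	}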

~Maarten

/Thomas

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel

