Re: [PATCH] drm/ttm: remove fence_lock

On Thu, Oct 18, 2012 at 06:43:40PM +0200, Thomas Hellstrom wrote:
> On 10/18/2012 04:45 PM, Maarten Lankhorst wrote:
> >>On 18-10-12 13:55, Thomas Hellstrom wrote:
> >>On 10/18/2012 01:38 PM, Maarten Lankhorst wrote:
> >>>>On 18-10-12 13:02, Thomas Hellstrom wrote:
> >>>>On 10/18/2012 10:37 AM, Maarten Lankhorst wrote:
> >>>>>Hey,
> >>>>>
> >>>>>>On 18-10-12 09:59, Thomas Hellstrom wrote:
> >>>>>>On 10/18/2012 09:28 AM, Thomas Hellstrom wrote:
> >>>>>>>Hi, Maarten,
> >>>>>>>
> >>>>>>>As you know I have been having my doubts about this change.
> >>>>>>>To me it seems insane to be forced to read the fence pointer under
> >>>>>>>reservation, simply because when you take the reserve lock, another
> >>>>>>>process may already hold it, and there is a substantial chance that that
> >>>>>>>process will also be waiting for idle while holding the reserve lock.
> >>>>>I think it makes perfect sense, the only times you want to read the fence
> >>>>>is when you want to change the members protected by the reservation.
> >>>>No, that's not true. A typical case (or the only case)
> >>>>is where you want to do a map with no_wait semantics. You will want
> >>>>to be able to access a buffer's results even if the eviction code
> >>>>is about to schedule an unbind from the GPU, and should that really
> >>>>require having the buffer reserved?
> >>>Well, either block on reserve or return -EBUSY if reserved; presumably
> >>>the former.
> >>>
> >>>ttm_bo_vm_fault already does the latter, anyway.
> >>ttm_bo_vm_fault only trylocks to avoid a deadlock with mmap_sem; it's really
> >>a waiting reserve, but for different reasons. Typically a user-space app will
> >>keep asynchronous maps to TTM during a buffer's lifetime, and that lifetime
> >>may be long, as user-space apps keep caches.
> >>That's not the same as accessing a buffer after the GPU is done with it.
> >Ah indeed.
> >>>You don't need to hold the reservation while performing the wait itself,
> >>>though; you could check whether ttm_bo_wait(no_wait_gpu = true) returns
> >>>-EBUSY, and if so take a ref to the sync_obj member and then wait after
> >>>unreserving. You wouldn't reset the sync_obj member to NULL in that case,
> >>>but that should be harmless. This would allow you to keep reservations
> >>>fast and short. Maybe a helper would be appropriate for this, since radeon
> >>>and nouveau both seem to do this.
> >>>
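A rough sketch of the helper suggested above might look like the following.
This is illustrative only and not taken from any posted patch: it assumes
the circa-2012 TTM interfaces (ttm_bo_wait(bo, lazy, interruptible, no_wait),
bo->sync_obj, and the driver's sync_obj_ref/sync_obj_wait/sync_obj_unref
ops), and that the fence pointer is protected by the reservation as in the
patch under discussion; the exact argument lists may differ.

#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/*
 * Sketch of a "wait outside reservation" helper: poll for idle while
 * the reservation is held and, if the buffer is still busy, take a
 * reference to the fence, unreserve, and do the long wait unlocked.
 * The caller must hold the reservation; it is released on all paths.
 */
static int ttm_bo_wait_unreserved_sketch(struct ttm_buffer_object *bo,
                                         bool interruptible)
{
        struct ttm_bo_driver *driver = bo->bdev->driver;
        void *sync_obj;
        int ret;

        /* Non-blocking idle check under the (short) reservation. */
        ret = ttm_bo_wait(bo, false, interruptible, true /* no_wait */);
        if (ret != -EBUSY) {
                ttm_bo_unreserve(bo);
                return ret;
        }

        /*
         * Still busy: grab a reference so the fence stays valid, then
         * drop the reservation before the potentially long GPU wait.
         * bo->sync_obj is deliberately left non-NULL, as noted above.
         */
        sync_obj = driver->sync_obj_ref(bo->sync_obj);
        ttm_bo_unreserve(bo);

        ret = driver->sync_obj_wait(sync_obj, false, interruptible);
        driver->sync_obj_unref(&sync_obj);
        return ret;
}

Radeon's and nouveau's wait paths could then presumably share such a helper
instead of open-coding the reserve/check/unreserve/wait sequence.
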
> >>The problem is that as long as other users are waiting for idle with the
> >>reservation held, for example to switch GPU engines or to delete a GPU
> >>binding, waiting for reserve will in many cases mean waiting for the GPU.
> >This example sounds inefficient. I know nouveau can do this, but it
> >essentially just stalls the GPU entirely. I guess you mean functions like
> >nouveau_gem_object_close? It wouldn't surprise me if performance in nouveau
> >could be improved simply by fixing those cases up instead, since they also
> >stall the application completely for other uses.
> >
> Please see the Nouveau cpu_prep patch as well.
> 
> While there are a number of cases that can be fixed up, also in
> Radeon, there's no way around the fact that reservation is a
> heavyweight lock that will be held while waiting for idle,
> particularly on simpler hardware without support for fence ordering
> with barriers and/or "semaphores" and accelerated eviction.
> 
> As such, it is unsuitable for protecting read access to the fence
> pointer. If the intention is to keep a single fence pointer,
> I think the best solution is to allow reading the fence pointer
> outside reservation, but to make sure this can be done
> atomically. If the intention is to protect an array or list of fence
> pointers, I think a spinlock is the better solution.
> 
> /Thomas
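
For comparison, "reading the fence pointer outside reservation" with the
pointer kept under a spinlock (essentially what the existing
bdev->fence_lock provides today, before the patch removes it) could look
roughly like the sketch below. Again illustrative only, with the sync_obj
op signatures assumed from the TTM of that era:

#include <linux/spinlock.h>
#include <drm/ttm/ttm_bo_driver.h>

/*
 * Sketch: take a reference to a buffer's fence without reserving it,
 * relying on a spinlock (here the existing bdev->fence_lock) to make
 * the pointer read and the refcount bump atomic with respect to the
 * code that signals and clears the fence.  The caller waits on the
 * returned object and drops it with sync_obj_unref().
 */
static void *ttm_bo_get_sync_obj_sketch(struct ttm_buffer_object *bo)
{
        struct ttm_bo_device *bdev = bo->bdev;
        void *sync_obj = NULL;

        spin_lock(&bdev->fence_lock);
        if (bo->sync_obj)
                sync_obj = bdev->driver->sync_obj_ref(bo->sync_obj);
        spin_unlock(&bdev->fence_lock);

        return sync_obj;
}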

Just wanted to point out that, like Thomas, I am concerned about having to
have the object reserved when waiting on its associated fence. I fear it
will hurt us somehow.

I will try to spend a couple of days stress-testing your branch on radeon
to see whether it hurts performance in any of the current use cases.

Cheers,
Jerome
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel

