Re: [PATCH 1/4] drm/vmwgfx: Assign eviction priorities to resources

On 6/18/19 2:19 PM, Daniel Vetter wrote:
On Tue, Jun 18, 2019 at 11:54:08AM +0100, Emil Velikov wrote:
Hi Thomas,

On 2019/06/18, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom <thellstrom@xxxxxxxxxx>

TTM provides a means to assign eviction priorities to buffer objects. This
means that all buffer objects with a lower priority will be evicted first
on memory pressure.

Use this to make sure surfaces, and in particular non-dirty surfaces, are
evicted first. Evicting shaders, cotables and contexts in particular implies
a significant performance hit on vmwgfx, so make sure these resources are
evicted last.

Some buffer objects are sub-allocated in user-space, which means we can have
many resources attached to a single buffer object or resource. In that case
the buffer object is given the highest priority of the attached resources.

Signed-off-by: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Reviewed-by: Deepak Rawat <drawat@xxxxxxxxxx>
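
For anyone curious how the priority propagation described above could look,
here is a minimal sketch. The struct and helper names are illustrative
(written from memory, not copied from the patch); the point is just the
per-priority refcounting and picking the highest used level:

#include <drm/ttm/ttm_bo_driver.h>	/* ttm_buffer_object, TTM_MAX_BO_PRIORITY */

struct sketch_bo {
	struct ttm_buffer_object base;
	/* How many attached resources use each eviction priority. */
	u32 res_prios[TTM_MAX_BO_PRIORITY];
};

/* Recompute base.priority as the highest priority currently in use. */
static void sketch_bo_prio_adjust(struct sketch_bo *vbo)
{
	int i = TTM_MAX_BO_PRIORITY - 1;

	while (i > 0 && vbo->res_prios[i] == 0)
		--i;
	vbo->base.priority = i;
}

/* Called when a resource with priority @prio attaches to the BO. */
static void sketch_bo_prio_add(struct sketch_bo *vbo, int prio)
{
	if (vbo->res_prios[prio]++ == 0)
		sketch_bo_prio_adjust(vbo);
}

/* Called when such a resource detaches again. */
static void sketch_bo_prio_del(struct sketch_bo *vbo, int prio)
{
	if (--vbo->res_prios[prio] == 0)
		sketch_bo_prio_adjust(vbo);
}

That way the backing buffer object always sits on the LRU list matching its
most important attached resource, so a BO backing, say, a context
sub-allocation ends up being evicted last.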
Fwiw patches 1-3 are:
Reviewed-by: Emil Velikov <emil.velikov@xxxxxxxxxxxxx>

Patch 4 is:
Acked-by: Emil Velikov <emil.velikov@xxxxxxxxxxxxx>

Huge thanks for sorting this out.
Oh, does this mean we can remove the various master* callbacks from
drm_driver now? IIRC vmwgfx was the only user, and those callbacks seem
very tempting to various folks for implementing questionable driver hacks
... Happy to type the patches, but maybe simpler if you do that since all
this gets merged through the vmwgfx tree.

Cheers, Daniel

In case someone is following this, I'll paste in the commit message of 4/4, which is the relevant one here...

8<--------------------------------------------

At one point, the GPU command verifier and user-space handle manager
couldn't properly protect GPU clients from accessing each other's data.
Instead there was an elaborate mechanism to make sure only the active
master's primary clients could render. The other clients were either
put to sleep or even killed (if the master had exited). VRAM was
evicted on master switch. With the advent of render-node functionality,
we relaxed the VRAM eviction, but the other mechanisms stayed in place.

Now that the GPU command verifier and TTM object manager properly
isolate primary clients from different master realms, we can remove the
master-switch-related code and drop those legacy features.

8<-------------------------------------------

I think we can at least take a look. I'm out on a fairly long vacation soon, though, so in any case it won't be before August or so.

One use we still have for master_set() is this: if a master is switched away, the mode list then changes, and the master is switched back, it will typically not have acted on the sysfs event received while switched out, and will come back in an incorrect mode. Since mode-list changes happen quite frequently with virtual display adapters, that's bad.
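
FWIW, that missed-event case is roughly the following, if the callback were to stay. A sketch only, not the actual vmwgfx code; it assumes re-emitting a hotplug uevent via drm_sysfs_hotplug_event() is an acceptable nudge:

#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_sysfs.h>

/*
 * A master that was switched away may have missed hotplug uevents,
 * so when it becomes master again, re-send one so it re-reads the
 * (possibly changed) mode list instead of restoring a stale mode.
 */
static void sketch_master_set(struct drm_device *dev,
			      struct drm_file *file_priv,
			      bool from_open)
{
	/* A fresh open hasn't missed anything; only nudge on switch-back. */
	if (!from_open)
		drm_sysfs_hotplug_event(dev);
}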

But perhaps we can consider moving that to core, if that's what's needed to get rid of the master switch callbacks.

/Thomas


