On 6/18/19 3:27 PM, Daniel Vetter wrote:
On Tue, Jun 18, 2019 at 03:08:01PM +0200, Thomas Hellstrom wrote:
On 6/18/19 2:19 PM, Daniel Vetter wrote:
On Tue, Jun 18, 2019 at 11:54:08AM +0100, Emil Velikov wrote:
Hi Thomas,
On 2019/06/18, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
TTM provides a means to assign eviction priorities to buffer objects, so
that under memory pressure the buffer objects with a lower priority are
evicted first.
Use this to make sure surfaces, and in particular non-dirty surfaces, are
evicted first. Evicting shaders, cotables and contexts in particular
implies a significant performance hit on vmwgfx, so make sure those
resources are evicted last.
Some buffer objects are sub-allocated in user-space, which means we can have
many resources attached to a single buffer object or resource. In that case
the buffer object is given the highest priority of the attached resources.
Signed-off-by: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Reviewed-by: Deepak Rawat <drawat@xxxxxxxxxx>
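To make the priority-inheritance part concrete, here is a rough sketch of the
idea, not the actual patch: a sub-allocated buffer object keeps a per-priority
count of attached resources and inherits the highest priority still in use.
The vmw_bo_prio_* helpers and the prios[] array are made-up names for
illustration; bo->priority and TTM_MAX_BO_PRIORITY are the existing TTM pieces.

/*
 * Illustrative sketch only (names are hypothetical): track how many
 * resources of each eviction priority are attached to a buffer object
 * and let the buffer object inherit the highest priority in use.
 */
struct vmw_bo_prio_track {
	u32 prios[TTM_MAX_BO_PRIORITY];	/* refcount per TTM priority */
};

/* Caller is expected to hold the bo reservation. */
static void vmw_bo_prio_adjust(struct ttm_buffer_object *bo,
			       struct vmw_bo_prio_track *track)
{
	int i;

	for (i = TTM_MAX_BO_PRIORITY - 1; i > 0; --i) {
		if (track->prios[i]) {
			bo->priority = i;
			return;
		}
	}
	bo->priority = 0;
}

/* A resource of priority @prio was attached to @bo. */
static void vmw_bo_prio_add(struct ttm_buffer_object *bo,
			    struct vmw_bo_prio_track *track, int prio)
{
	if (track->prios[prio]++ == 0)
		vmw_bo_prio_adjust(bo, track);
}

/* A resource of priority @prio was detached from @bo. */
static void vmw_bo_prio_del(struct ttm_buffer_object *bo,
			    struct vmw_bo_prio_track *track, int prio)
{
	if (--track->prios[prio] == 0)
		vmw_bo_prio_adjust(bo, track);
}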
Fwiw patches 1-3 are:
Reviewed-by: Emil Velikov <emil.velikov@xxxxxxxxxxxxx>
Patch 4 is:
Acked-by: Emil Velikov <emil.velikov@xxxxxxxxxxxxx>
Huge thanks for sorting this out.
Oh, does this mean we can remove the various master* callbacks from
drm_driver now? Iirc vmwgfx was the only user, and those callbacks seem
very tempting to various folks for implementing questionable driver hacks
... Happy to type the patches, but maybe simpler if you do that since all
this gets merged through the vmwgfx tree.
Cheers, Daniel
In case someone follows this, I'll paste in the commit message of 4/4,
which is the relevant one here:
8<--------------------------------------------
At one point, the GPU command verifier and user-space handle manager
couldn't properly protect GPU clients from accessing each other's data.
Instead there was an elaborate mechanism to make sure only the active
master's primary clients could render. The other clients were either
put to sleep or even killed (if the master had exited). VRAM was
evicted on master switch. With the advent of render-node functionality,
we relaxed the VRAM eviction, but the other mechanisms stayed in place.
Now that the GPU command verifier and TTM object manager properly
isolate primary clients from different master realms, we can remove the
master switch related code and drop those legacy features.
8<-------------------------------------------
I think we can at least take a look. I'm leaving on a fairly long vacation
soon, so in any case it won't be before August or so.
Ah don't worry, if this all lands in the 5.3 merge window I can take a
look in a few weeks.
One use we still have for master_set() is this: if a master is switched away,
the mode list then changes, and the master is later switched back, it will
typically not have acted on the sysfs hotplug event received while switched
out, and will come back in an incorrect mode. Since mode-list changes happen
quite frequently with virtual display adapters, that's bad.
But perhaps we can consider moving that to core, if that's what's needed to
get rid of the master switch callbacks.
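To make the use case concrete, the kind of master_set() hook being discussed
would look roughly like the following. This is an illustrative sketch, not the
vmwgfx code; the callback signature and drm_sysfs_hotplug_event() exist in the
DRM core, while the function name is made up.

static int vmw_example_master_set(struct drm_device *dev,
				  struct drm_file *file_priv,
				  bool from_open)
{
	/*
	 * The mode list on a virtual display adapter may have changed
	 * while the previous master was switched away, so re-send the
	 * hotplug uevent and let the incoming master re-probe its outputs
	 * instead of restoring a stale mode.
	 */
	drm_sysfs_hotplug_event(dev);

	return 0;
}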
Hm, this sounds a bit like papering over userspace bugs, at least if
you're referring to drm_sysfs_hotplug_event(). Userspace is supposed to
either keep listening or re-acquire all the kms output state and do the
hotplug processing in one go when becoming active again.
Ofc it exists, so we can't just remove it. I wouldn't want to make this
part of the uapi though, feels like duct-taping around sloppy userspace.
Maybe we could work on a gradual plan to deprecate this, limiting it to
only older vmwgfx versions as a start?
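For reference, the userspace side of "re-acquire all the kms output state"
amounts to something like the libdrm snippet below when regaining master.
This is a simplified sketch; the function name and error handling are
illustrative only.

#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int reprobe_outputs(int drm_fd)
{
	drmModeResPtr res;
	int i;

	if (drmSetMaster(drm_fd))
		return -1;

	res = drmModeGetResources(drm_fd);
	if (!res)
		return -1;

	for (i = 0; i < res->count_connectors; i++) {
		/* drmModeGetConnector() forces a fresh probe of the mode list. */
		drmModeConnectorPtr conn =
			drmModeGetConnector(drm_fd, res->connectors[i]);

		if (!conn)
			continue;

		printf("connector %u: %s, %d modes\n", conn->connector_id,
		       conn->connection == DRM_MODE_CONNECTED ?
		       "connected" : "disconnected",
		       conn->count_modes);
		drmModeFreeConnector(conn);
	}

	drmModeFreeResources(res);
	return 0;
}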
Sounds OK to me. First I guess I need to figure out which compositors /
user-space drivers actually suffer from this. If there are many, it
might be a pain trying to fix them all.
Thanks,
Thomas
These kinds of tiny but important differences in how drivers implement kms
are why I'd much, much prefer that it not even be possible to do stuff like this.
Thanks, Daniel