Re: [PATCH 2/4] drm/xe: store bind time pat index to xe_bo


On 23/01/2024 08:05, Ville Syrjälä wrote:
On Fri, Jan 19, 2024 at 03:45:22PM +0000, Matthew Auld wrote:
On 18/01/2024 15:27, Juha-Pekka Heikkila wrote:
Store pat index from xe_vma to xe_bo

Signed-off-by: Juha-Pekka Heikkila <juhapekka.heikkila@xxxxxxxxx>
---
   drivers/gpu/drm/xe/xe_pt.c | 4 ++++
   1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index de1030a47588..4b76db698878 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1252,6 +1252,10 @@ __xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue
   		return ERR_PTR(-ENOMEM);
   	}
+	if (xe_vma_bo(vma)) {
+		xe_vma_bo(vma)->pat_index = vma->pat_index;

Multiple mappings will trash this, I think. Is that OK for your use case?
It can be useful to map the same resource as both compressed and uncompressed
to facilitate in-place decompression/compression.

I thought the pat_index was set for the entire bo? The
cache_level->pat_index stuff doesn't really work otherwise,
I don't think (assuming it works at all).

AFAIK it is mostly like that in i915 because it doesn't have a vm_bind interface. With Xe we do have vm_bind, and the pat_index is a property of the ppGTT binding and therefore of the vma. There seem to be legitimate reasons to map the same resource with different pat_index values, like compressed/uncompressed. See BSpec 58797, "double map (alias) surfaces".
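
To make that concrete, with the hunk above something like the following can happen. This is purely an illustration; example_double_bind() and the two vma parameters are made up, and both VMAs are assumed to be backed by the same BO:

static void example_double_bind(struct xe_vma *vma_compressed,
				struct xe_vma *vma_uncompressed)
{
	/* Both VMAs point at the same BO (alias mapping per BSpec 58797). */
	struct xe_bo *bo = xe_vma_bo(vma_compressed);

	/* The first bind records the compressed pat_index in the BO... */
	bo->pat_index = vma_compressed->pat_index;

	/*
	 * ...and the bind of the uncompressed alias then overwrites it,
	 * so whichever binding happened last "wins" the single slot.
	 */
	bo->pat_index = vma_uncompressed->pat_index;
}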


So dunno why this is doing anything using vmas. I think
what we probably need is to check/set the bo pat_index
at fb create time, and lock it into place (if there's
some mechanism by which a random userspace client could
change it after the fact, and thus screw up everything).

Maybe we can seal the pat_index on first bind or something if the BO underneath is marked with XE_BO_SCANOUT?
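
Roughly along these lines in __xe_pt_bind_vma(), completely untested sketch: it assumes the bo->pat_index field added in this patch, a scanout flag named something like XE_BO_SCANOUT living in bo->flags, and it treats a pat_index of zero as "not yet recorded", which would need a proper sentinel in a real version:

	if (xe_vma_bo(vma)) {
		struct xe_bo *bo = xe_vma_bo(vma);

		/* Scanout BO already bound with a different pat_index? */
		if ((bo->flags & XE_BO_SCANOUT) && bo->pat_index &&
		    bo->pat_index != vma->pat_index)
			return ERR_PTR(-EINVAL);

		/* Otherwise record (and effectively seal) it on first bind. */
		bo->pat_index = vma->pat_index;
	}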



Also, it would be good to be clear about what happens if the KMD doesn't do
anything to prevent compression with non-tile4. Is it just a bit of
display corruption, or something much worse that we need to prevent? Is
this just a best-effort check to help userspace? Otherwise it is hard to
evaluate how solid our checking needs to be to prevent this
scenario. For example, how are binding vs display races handled? What
happens if the bind appears after the display check?

+	}
+
   	fence = xe_migrate_update_pgtables(tile->migrate,
   					   vm, xe_vma_bo(vma), q,
   					   entries, num_entries,
