Re: [Nouveau] [PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces


On 2/7/23 11:50, Danilo Krummrich wrote:
On 2/7/23 10:35, Christian König wrote:
[SNIP]

Just to have it all in place, the example I gave was:
 - two virtually contiguous buffers A and B
 - binding 1 mapped to A with BO offset 0
 - binding 2 mapped to B with BO offset length(A)

What I did not mention is that both A and B aren't sparse buffers in this example, although it probably doesn't matter too much.

Since the conditions for merging are met, we merge binding 1 and binding 2 right at the time binding 2 is requested. To do so, a driver might unmap binding 1 for a very short period of time (e.g. to (re-)map the freshly merged binding with a different page size if possible).
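
To make this concrete, a rough sketch of the situation; the struct, helper and values below are made up for illustration only and are neither the proposed uAPI nor actual nouveau code:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct binding {
        uint64_t va;        /* GPU virtual address */
        uint64_t bo_offset; /* offset into the backing BO */
        uint64_t range;     /* size in bytes */
};

/* Merge candidates: virtually contiguous bindings. */
static bool can_merge(const struct binding *a, const struct binding *b)
{
        return a->va + a->range == b->va;
}

int main(void)
{
        /* binding 1 -> buffer A, BO offset 0 */
        struct binding b1 = { .va = 0x100000, .bo_offset = 0x0,     .range = 0x10000 };
        /* binding 2 -> buffer B, BO offset length(A) */
        struct binding b2 = { .va = 0x110000, .bo_offset = 0x10000, .range = 0x10000 };

        if (can_merge(&b1, &b2)) {
                /* Note the transient unmap of binding 1 while the merged
                 * range is remapped (e.g. with a larger page size), exactly
                 * while shaders may still access buffer A through it. */
                printf("unmap [0x%" PRIx64 ", +0x%" PRIx64 ")\n", b1.va, b1.range);
                printf("map   [0x%" PRIx64 ", +0x%" PRIx64 ")\n",
                       b1.va, b1.range + b2.range);
        }
        return 0;
}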

Nope, that's not correct handling.

I agree, and that's exactly what I'm trying to say. However, I'm starting to realize that this isn't correct even when it happens within the same buffer.

Yes, exactly that's my point.




From a userspace perspective, buffer A is ready to use before binding 2 is applied to buffer B; hence it would be illegal to touch binding 1 again when userspace asks the kernel to map binding 2 to buffer B.

Besides that, I think there is no point in merging across buffers anyway, because we'd only end up splitting such a merged mapping later on when one of the two buffers is destroyed.

Also, I think the same applies to sparse buffers as well: a mapping within A isn't expected to be re-mapped just because something is mapped to B.

However, in this context I'm starting to wonder whether re-mapping in the course of a merge or split is allowed at all, even within the same sparse buffer (and even with a separate page table for sparse mappings as described in my last mail; shaders would never fault).

See, your assumption that userspace/applications don't intentionally modify the VA space while the GPU is accessing it is, bluntly speaking, simply incorrect.


I don't assume that. The opposite is the case. My assumption is that it's always OK for userspace to intentionally modify the VA space.

However, I also assumed that if userspace asks for e.g. a new mapping within a certain buffer, it is OK for the kernel to apply further changes to the VA space (e.g. re-organizing PTs to split or merge) that userspace isn't aware of, at least as long as they happen within the bounds of this particular buffer, but not for other buffers.

Well, when this somehow affects shaders which access other parts of the buffer at the same time, then that won't work.

The reasoning I had in mind was that if userspace asks for any modification of a given portion of the VA space (that is, a VkBuffer), userspace must assume that, until this modification (e.g. re-organization of PTs) is complete, reads may return 0s in the meantime. This seems to be clearly wrong.

When you have a VA address which is mapped to buffer A and accessed by some GPU shaders it is perfectly valid for the application to say "map it again to the same buffer A".

It is also perfectly valid for an application to re-map this region to a different buffer B; it's just not defined when the accesses then transition from A to B. (AFAIK this is currently being worked on in a new specification.)
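
To illustrate what this looks like from the application side, a minimal Vulkan sketch; queue, sparse_buf and mem_b are assumed to have been created elsewhere with the appropriate sparseBinding setup:

#include <vulkan/vulkan.h>

/* Re-point the first `size` bytes of a sparse buffer from whatever memory
 * currently backs it (e.g. A) to mem_b, while GPU work referencing the
 * buffer may still be in flight. Only *when* the accesses transition from
 * A to B is undefined, not whether the operation is legal. */
static void rebind_range(VkQueue queue, VkBuffer sparse_buf,
                         VkDeviceMemory mem_b, VkDeviceSize size)
{
        const VkSparseMemoryBind bind = {
                .resourceOffset = 0,
                .size           = size,
                .memory         = mem_b,
                .memoryOffset   = 0,
        };
        const VkSparseBufferMemoryBindInfo buf_bind = {
                .buffer    = sparse_buf,
                .bindCount = 1,
                .pBinds    = &bind,
        };
        const VkBindSparseInfo info = {
                .sType           = VK_STRUCTURE_TYPE_BIND_SPARSE_INFO,
                .bufferBindCount = 1,
                .pBufferBinds    = &buf_bind,
        };

        vkQueueBindSparse(queue, 1, &info, VK_NULL_HANDLE);
}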

So if your page table updates result in the shader getting 0s back in the meantime because you change the underlying mapping, you simply have an implementation bug in Nouveau.

Luckily that's not the case (anymore).


I don't know how Nvidia hw handles this, and yes it's quite complicated on AMD hw as well because our TLBs are not really made for this use case, but I'm 100% sure that this is possible since it is still part of some of the specifications (mostly Vulkan I think).

To sum it up, as far as I can see, giving the regions to the kernel is not something you would want for Nouveau either.

If, as it turns out, it's also not allowed to do what I described above within the same VkBuffer, I agree the bounds aren't needed for merging.

However, I still don't see why we would want to merge over buffer boundaries, because ultimately we'll end up splitting such a merged mapping later on anyway once one of the buffers is destroyed.

Well, the key point is that all approaches have some pros and cons.

If we merge and decide to only do that inside certain boundaries, then those boundaries need to be provided and checked against. This burns quite some CPU cycles (a rough sketch of such a check follows below).

If we just merge whatever we can, we might have extra page table updates which cost time and could result in undesired side effects.

If we don't merge at all, we have additional housekeeping for the mappings and maybe hit hw restrictions.
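
Roughly, the boundary check mentioned above could look like the following; everything here is made up for illustration, and the real lookup would presumably be an interval tree rather than the linear scan used to keep the sketch self-contained:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct region {                 /* userspace-provided merge boundary, e.g. one VkBuffer */
        uint64_t start, end;
};

struct mapping {
        uint64_t va, range;
};

static const struct region regions[] = {
        { 0x100000, 0x140000 }, /* e.g. buffer A's VA range */
        { 0x140000, 0x180000 }, /* e.g. buffer B's VA range */
};

/* The extra lookup per merge attempt is where the CPU cycles go. */
static const struct region *region_find(uint64_t va)
{
        for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++)
                if (va >= regions[i].start && va < regions[i].end)
                        return &regions[i];
        return NULL;
}

/* Only merge virtually contiguous mappings that lie within one region. */
static bool may_merge(const struct mapping *a, const struct mapping *b)
{
        const struct region *ra = region_find(a->va);
        const struct region *rb = region_find(b->va);

        return ra && ra == rb &&
               a->va + a->range == b->va &&
               b->va + b->range <= ra->end;
}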

Also, as explained in one of the previous mails, in nouveau we can have separate PTs for sparse mappings with large page sizes and separate PTs for memory-backed mappings with smaller page sizes overlaying them. Hence, I need to track a single sparse mapping per buffer spanning the whole buffer (which I do with a region) and the actual memory-backed mappings within the same range.

Now, this might or might not be unique to Nvidia hardware. If nouveau were the only potential user, and we didn't care about potentially merging mappings over buffer boundaries (and hence producing foreseeable splits of those merged mappings), we could get rid of regions entirely.
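
The bookkeeping I mean is roughly the following; the structs are hypothetical and do not reflect the actual patches:

#include <stdint.h>

/* Memory-backed mapping with a smaller page size overlaying the region. */
struct mem_mapping {
        uint64_t va_start;
        uint64_t va_range;
        uint64_t bo_offset;
        struct mem_mapping *next;
};

/* One sparse mapping per buffer, spanning the whole buffer; this is what
 * the "region" tracks, living in the large-page-size page tables. */
struct sparse_region {
        uint64_t va_start;
        uint64_t va_range;
        struct mem_mapping *mappings;   /* memory-backed mappings within the region */
};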

This sounds similar to what AMD hw used to have up until gfx8 (I think): basically, sparse resources were defined through a separate mechanism from the address resolution of the page tables. I won't rule out that other hardware has similar approaches.

On the other hand, when you have separate page tables for address translation and sparse handling, then why not instantiate two separate VM manager instances for them?
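
Purely as an illustration of what I mean by two instances; the vm_mgr type and its init are made up here, not an existing DRM interface:

#include <stdint.h>

/* Stand-in for whatever manager structure tracks one set of mappings. */
struct vm_mgr {
        uint64_t va_start, va_range;
        uint32_t page_shift;
};

static void vm_mgr_init(struct vm_mgr *mgr, uint64_t va_start,
                        uint64_t va_range, uint32_t page_shift)
{
        mgr->va_start = va_start;
        mgr->va_range = va_range;
        mgr->page_shift = page_shift;
}

/* Two separate instances, one per page-table "view" of the same VA range. */
struct uvmm_sketch {
        struct vm_mgr sparse;   /* sparse bindings, large pages */
        struct vm_mgr mapped;   /* memory-backed bindings, small pages */
};

static void uvmm_sketch_init(struct uvmm_sketch *u,
                             uint64_t va_start, uint64_t va_range)
{
        vm_mgr_init(&u->sparse, va_start, va_range, 21);        /* e.g. 2 MiB pages */
        vm_mgr_init(&u->mapped, va_start, va_range, 12);        /* e.g. 4 KiB pages */
}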

Regards,
Christian.







So you need to be able to handle this case anyway, and the approach with the regions won't help you at all in preventing that.

Regards,
Christian.







