Re: Explicit VM updates

Bas has the problem that CS submissions implicitly wait for VM updates.

Currently, when you unmap a BO the operation is only executed after all previously submitted CS have finished.

Similarly for mapping BOs: the next CS only starts after all pending page table updates have completed.

The mapping case was already handled by my prototype patch set, but the unmapping case still hurts a bit.

This implicit sync between CS and map/unmap operations can really hurt the performance of applications that make heavy use of PRTs (partially resident textures).
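
Here is a minimal sketch of the pattern that hurts, using the existing libdrm amdgpu interface (submit_cs() stands in for a normal command submission; BO setup and error handling are omitted):

    /* Map the BO into the GPU VM. Today the next CS implicitly waits
     * for the resulting page table update before it starts. */
    amdgpu_bo_va_op(bo, 0, bo_size, gpu_va,
                    AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE,
                    AMDGPU_VA_OP_MAP);

    submit_cs();

    /* Unmap again. Today this is only executed once all previously
     * submitted CS have finished, even if none of them still touch
     * the mapping. For PRT-heavy applications which remap constantly
     * this serialization adds up quickly. */
    amdgpu_bo_va_op(bo, 0, bo_size, gpu_va, 0, AMDGPU_VA_OP_UNMAP);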

Regards,
Christian.

On 01.06.22 at 18:27, Marek Olšák wrote:
Can you please summarize what this is about?

Thanks,
Marek

On Wed, Jun 1, 2022 at 8:40 AM Christian König <christian.koenig@xxxxxxx> wrote:
Hey guys,

so today Bas came up with a new requirement regarding explicit
synchronization for VM updates, together with a bunch of prototype patches.

I've been thinking about this stuff for quite some time, but to be
honest it's one of the trickiest parts of the driver.

So my current thinking is that we could potentially handle those
requirements like this:

1. We add some new EXPLICIT flag to the context (or CS?) and to the VM
IOCTL. This way we either get the new behavior for the whole CS+VM
combination or the old one, but never a mix of both.

2. When memory is unmapped we keep around the last unmap operation
inside the bo_va.

3. When memory is freed we add all the CS fences which could access that
memory, plus the last unmap operation, as BOOKKEEP fences to the BO and
as mandatory sync fences to the VM (see the sketch below).

Memory freed either because of an eviction or because userspace closes
the handle is treated as a combination of unmap+free.
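
In pseudo kernel code the idea would look roughly like this (just a
sketch: the last_unmap field and amdgpu_vm_bo_free() are made-up names
for illustration, while dma_resv_add_fence() and DMA_RESV_USAGE_BOOKKEEP
are the existing reservation object interfaces):

    struct amdgpu_bo_va {
            /* ... existing members ... */

            /* (2) last unmap operation, kept until the memory is freed */
            struct dma_fence        *last_unmap;
    };

    /* (3) called when the memory behind the mappings is freed */
    static void amdgpu_vm_bo_free(struct amdgpu_bo *bo,
                                  struct amdgpu_bo_va *bo_va)
    {
            /* BOOKKEEP fences don't trigger implicit sync on the BO */
            dma_resv_add_fence(bo->tbo.base.resv, bo_va->last_unmap,
                               DMA_RESV_USAGE_BOOKKEEP);

            /* ... likewise for all CS fences which could access the
             * memory, and add them as mandatory sync to the VM ... */
    }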


The result is the following semantic for userspace to avoid implicit
synchronization as much as possible:

1. When you allocate and map memory it is mandatory to either wait for
the mapping operation to complete or to add it as a dependency for your CS.
     If this isn't followed the application will run into CS faults
(that's pretty much what we already implemented).

2. When memory is freed you must unmap that memory first and then wait
for this unmap operation to complete before freeing the memory.
     If this isn't followed the kernel will forcefully add a wait to the
next CS, blocking it until the unmap has completed (see the sketch after
this list).

3. All VM operations requested by userspace will still be executed in
order; we can't run unmap + map in parallel, for example.
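
From the userspace point of view that would look like this (again just
a sketch; vm_map(), vm_unmap(), wait_fence() and friends are
placeholders, since the exact interface for getting a fence out of a
map/unmap operation is what the prototype patches have to define):

    /* (1) allocate + map: either wait for the page table update or
     * make it an explicit dependency of the CS */
    map_fence = vm_map(bo, gpu_va);
    cs_add_dependency(cs, map_fence);   /* or: wait_fence(map_fence) */
    submit_cs(cs);                      /* otherwise: CS fault */

    /* (2) free: unmap first and wait for the unmap to complete,
     * otherwise the kernel forcefully blocks the next CS instead */
    unmap_fence = vm_unmap(bo, gpu_va);
    wait_fence(unmap_fence);
    free_bo(bo);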

Is that something you guys can live with? As far as I can see it should
give you the maximum freedom possible, but is still doable.

Regards,
Christian.

