Re: [PATCH 00/13] mmu_notifier kill invalidate_page callback


 



On 31.08.2017 at 15:59, Jerome Glisse wrote:
[Adding Intel folks as they might be interested in this discussion]

On Wed, Aug 30, 2017 at 05:51:52PM -0400, Felix Kuehling wrote:
Hi Jérôme,

I have some questions about the potential range-start-end race you
mentioned.

On 2017-08-29 07:54 PM, Jérôme Glisse wrote:
Note that a lot of existing users look broken with respect to range_start()/
range_end(). Many users only have a range_start() callback, but there is nothing
preventing them from undoing what was invalidated in their range_start() callback
after it returns but before any CPU page table update takes place.

The code pattern used in KVM or umem ODP is an example of how to properly
avoid such a race. In a nutshell, use some kind of sequence number and an
active range-invalidation counter to block anything that might undo what the
range_start() callback did.
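
Roughly, that bookkeeping could look like the sketch below, loosely modeled on
KVM's mmu_notifier_count/mmu_notifier_seq pair. All the "my_" names and the
surrounding structure are made up for illustration; this is not code from any
existing driver, and the callback signatures are the ones in use at the time
of this thread:

#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

/* Illustrative mirror state; "my_" names are hypothetical. */
struct my_mirror {
        struct mmu_notifier     mn;
        spinlock_t              lock;
        unsigned long           invalidate_count; /* invalidations in flight */
        unsigned long           invalidate_seq;   /* bumped on each range_end */
};

static void my_invalidate_range_start(struct mmu_notifier *mn,
                                      struct mm_struct *mm,
                                      unsigned long start, unsigned long end)
{
        struct my_mirror *m = container_of(mn, struct my_mirror, mn);

        spin_lock(&m->lock);
        m->invalidate_count++;  /* blocks re-validation until range_end() */
        /* ... tear down the device mappings for [start, end) ... */
        spin_unlock(&m->lock);
}

static void my_invalidate_range_end(struct mmu_notifier *mn,
                                    struct mm_struct *mm,
                                    unsigned long start, unsigned long end)
{
        struct my_mirror *m = container_of(mn, struct my_mirror, mn);

        spin_lock(&m->lock);
        m->invalidate_count--;
        m->invalidate_seq++;    /* anything sampled before this is stale */
        spin_unlock(&m->lock);
}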
What happens when we start monitoring an address range after
invalidate_range_start was called? Sounds like we have to keep track of
all active invalidations for just such a case, even in address ranges
that we don't currently care about.

What are the things we cannot do between invalidate_range_start and
invalidate_range_end? amdgpu calls get_user_pages to re-validate our
userptr mappings after the invalidate_range_start notifier invalidated
them. Do we have to wait for invalidate_range_end before we can call
get_user_pages safely?
Well, the whole userptr bo object is somewhat broken from the start.
You never defined its semantics, i.e. what is expected. I can
think of 2 different semantics:
   A) a uptr buffer object is a snapshot of memory at the time of
      uptr buffer object creation
   B) a uptr buffer object allows the GPU to access a range of virtual
      addresses of a process and share a coherent view of that range
      between CPU and GPU

As implemented it is more in line with B, but it is not defined
anywhere AFAICT.

Well it is not documented, but the userspace APIs built on top of that require semantics B.

Essentially you could have cases where the GPU or the CPU is waiting in a busy loop for the other one to change some memory address.

Anyway, getting back to your questions: it mostly doesn't matter, as
you are using GUP, i.e. you are pinning pages, except for one scenario
(at least I can only think of one).

The problematic case is a race between a CPU write to a zero page or COW
page and the GPU driver doing a read-only GUP:

     CPU thread 1                       | CPU thread 2
     ---------------------------------------------------------------------
                                        |
                                        | uptr covering addr A read only
                                        | .... do stuff with A
     write fault to addr A              |
     invalidate_range_start([A, A+1])   | unbind_ttm -> unpin
                                        | validate bo -> GUP -> zero page
     lock page table                    |
     replace zero pfn/COW with new page |
     unlock page table                  |
     invalidate_range_end([A, A+1])     |

So here the GPU would be using the wrong page for the address. How bad
that is, is undefined, as the semantics of uptr are undefined. Given how it
has been used so far, this race is unlikely (I don't think we have many
userspace programs that use that feature and also fork()).


So I would first define the semantics of the uptr bo and then fix the code
accordingly. Semantic A is easier to implement, and you could
just drop the whole mmu_notifier. Maybe it is better to create a uptr
buffer object every time you want to snapshot a range of addresses. I
don't think the overhead of buffer creation would matter.

We do support creating a userptr without an mmu_notifier for exactly that purpose, e.g. uploads of snapshots of what the user space address space looked like at a certain moment.

Unfortunately we found that the overhead of buffer creation (and the related GUP) is way too high to be useful for a throw-away object. A plain memcpy into a BO simply has lower latency overall.

And yeah, at least I'm perfectly aware of the problems with fork() and COW. BTW: it becomes really ugly if you think about what happens when the parent writes to a page first and the GPU then ends up with the child's copy.

Regards,
Christian.

If you want to close the race for COW and the zero page in the case of read-only
GUP, there is no other way than what KVM or ODP is doing. I had a
patchset to simplify all this, but I need to bring it back to life.
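
Continuing the illustrative sketch from earlier in the thread, the driver-side
re-validation would then look roughly like KVM's mmu_notifier_retry() pattern.
Again, all "my_" names are hypothetical; this is not the actual amdgpu or KVM
code, just a sketch of the idea:

static bool my_mirror_retry(struct my_mirror *m, unsigned long seq)
{
        /* Caller must hold m->lock. */
        if (m->invalidate_count)        /* an invalidation is still in flight */
                return true;
        return m->invalidate_seq != seq; /* one completed since we sampled */
}

static int my_revalidate_userptr(struct my_mirror *m, unsigned long addr,
                                 int npages, struct page **pages)
{
        unsigned long seq;
        int ret;

again:
        seq = READ_ONCE(m->invalidate_seq);
        smp_rmb();

        /* Read-only pin; this can race with COW/zero-page replacement. */
        ret = get_user_pages_fast(addr, npages, 0, pages);
        if (ret < 0)
                return ret;

        spin_lock(&m->lock);
        if (my_mirror_retry(m, seq)) {
                /* Raced with an invalidation: drop the pins and retry. */
                spin_unlock(&m->lock);
                while (ret--)
                        put_page(pages[ret]);
                goto again;
        }
        /* ... set up the GPU mappings while still holding the lock ... */
        spin_unlock(&m->lock);
        return 0;
}

Doing the GPU mapping setup under the same lock that the range_start() callback
takes before tearing mappings down is what closes the window the race diagram
above illustrates.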


Note that other things might race, but as you pin the pages they do
not matter. It just means that if you GUP after range_start() but
before range_end() and before the CPU page table update, then you pinned
the same old page again and nothing will happen (migration will fail,
MADV_FREE will be a nop, ...). So you just did the range_start() callback
for nothing in those cases.

(Sorry for taking so long to answer, I forgot your mail yesterday with
  all the other discussion going on.)

Cheers,
Jérôme


_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



