Re: [PATCH 0/5] Add process_memwatch syscall

David Hildenbrand <david@xxxxxxxxxx> writes:

> On 26.07.22 18:18, Muhammad Usama Anjum wrote:
>> Hello,
>
> Hi,
>
>> 
>> This patch series implements a new syscall, process_memwatch. Currently,
>> only the support to watch soft-dirty PTE bit is added. This syscall is
>> generic to watch the memory of the process. There is enough room to add
>> more operations like this to watch memory in the future.
>> 
>> Soft-dirty PTE bit of the memory pages can be viewed by using pagemap
>> procfs file. The soft-dirty PTE bit for the memory in a process can be
>> cleared by writing to the clear_refs file. This series adds features that
>> weren't possible through the Proc FS interface.
>> - There is no atomic get soft-dirty PTE bit status and clear operation
>>   possible.
>
> Such an interface might be easy to add, no?
>
>> - The soft-dirty PTE bit of only a part of memory cannot be cleared.
>
> Same.
>
> So I'm curious why we need a new syscall for that.

Hi David,

Yes, sure.  Though it would have to be an ioctl, since we need input
and output semantics in the same call to keep the get+clear operation
atomic.

I answered Peter Enderborg about our concerns with turning this into
an ioctl, but they are possible to overcome.

>> project. The Proc FS interface is enough for that as I think the process
>> is frozen. We have the use case where we need to track the soft-dirty
>> PTE bit for running processes. We need this tracking and clear mechanism
>> of a region of memory while the process is running to emulate the
>> getWriteWatch() syscall of Windows. This syscall is used by games to keep
>> track of dirty pages and keep processing only the dirty pages. This
>> syscall can be used by the CRIU project and other applications which
>> require soft-dirty PTE bit information.
>> 
>> As in the current kernel there is no way to clear a part of memory (instead
>> of clearing the Soft-Dirty bits for the entire process) and get+clear
>> operation cannot be performed atomically, there are other methods to mimic
>> this information entirely in userspace with poor performance:
>> - The mprotect syscall and SIGSEGV handler for bookkeeping
>> - The userfaultfd syscall with the handler for bookkeeping
>
> You write "poor performance". Did you actually implement a prototype
> using userfaultfd-wp? Can you share numbers for comparison?

Yes, we did.  I think Usama can share some numbers.

The problem with userfaultfd, as far as I understand, is that every
time a page is touched, a second userspace process (or thread) has to
be scheduled to record that the page was dirtied and to remove the
write protection so the originating process can proceed.  This
context switch is prohibitively expensive for our use case, where
Windows applications might trigger it quite often.  The soft-dirty
bit, instead, allows the page tracking to be done entirely in
kernelspace.

If I understand correctly, userfaultfd is useful for VM/container
migration, where the cost of the context switch is not a real
concern, since the migration itself is far more expensive.

Maybe we're missing some feature about userfaultfd that would allow us
to avoid the cost, but from our observations we didn't find a way to
overcome it.

>>         long process_memwatch(int pidfd, unsigned long start, int len,
>>                               unsigned int flags, void *vec, int vec_len);
>> 
>> This syscall can be used by the CRIU project and other applications which
>> require soft-dirty PTE bit information. The following operations are
>> supported in this syscall:
>> - Get the pages that are soft-dirty.
>> - Clear the pages which are soft-dirty.
>> - The optional flag to ignore the VM_SOFTDIRTY and only track per page
>> soft-dirty PTE bit
>
> Huh, why? VM_SOFTDIRTY is an internal implementation detail and should
> remain such.
> VM_SOFTDIRTY translates to "all pages in this VMA are soft-dirty".

That is something very specific to our use case, and we should
explain it a bit better.  The problem is that modifying VM_SOFTDIRTY
requires acquiring the mmap write lock, an overhead that is very
visible in our benchmarks of Windows games running under Wine.

Since the main reason for VM_SOFTDIRTY to exist, as far as we
understand it, is to track VMA remapping, and that is a case we don't
need to worry about when implementing Windows semantics, we'd like to
be able to avoid this extra overhead, optionally, iff userspace knows
it can be done safely.

VM_SOFTDIRTY is indeed an internal implementation detail, which is
why we are proposing to expose the feature in terms of tracking VMA
reuse instead.

Thanks,

-- 
Gabriel Krisman Bertazi


