Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table

On 14.02.23 14:07, Pasha Tatashin wrote:
On Tue, Feb 14, 2023 at 4:58 AM David Hildenbrand <david@xxxxxxxxxx> wrote:

On 10.02.23 18:20, Chih-En Lin wrote:
On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
Currently, copy-on-write is only used for the mapped memory; the child
process still needs to copy the entire page table from the parent
process during forking. The parent process might take a lot of time and
memory to copy the page table when the parent has a big page table
allocated. For example, the memory usage of a process after forking with
1 GB mapped memory is as follows:

For some reason, I was not able to reproduce performance improvements
with a simple fork() performance measurement program. The results that
I saw are the following:

Base:
Fork latency per gigabyte: 0.004416 seconds
Fork latency per gigabyte: 0.004382 seconds
Fork latency per gigabyte: 0.004442 seconds
COW kernel:
Fork latency per gigabyte: 0.004524 seconds
Fork latency per gigabyte: 0.004764 seconds
Fork latency per gigabyte: 0.004547 seconds

AMD EPYC 7B12 64-Core Processor
Base:
Fork latency per gigabyte: 0.003923 seconds
Fork latency per gigabyte: 0.003909 seconds
Fork latency per gigabyte: 0.003955 seconds
COW kernel:
Fork latency per gigabyte: 0.004221 seconds
Fork latency per gigabyte: 0.003882 seconds
Fork latency per gigabyte: 0.003854 seconds

Given that the page table for the child is not copied, I was expecting
the performance to be better with the COW kernel, and also not to depend
on the size of the parent.

Yes, the child won't duplicate the page table, but fork will still
traverse all the page table entries to do the accounting.
And, since this patch extends COW to the PTE table level, it is no
longer done at the granularity of individual mapped pages (page table
entries), so we have to guarantee that every mapped page in such a page
table is eligible for COW mapping.
This kind of checking also costs some time.
As a result, because of the accounting and the checking, the COW PTE
fork still depends on the size of the parent, so the improvement might
not be significant.
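
To illustrate the point, here is a rough userspace sketch (not the actual
code from this series; all names are invented) of why fork() stays O(N) in
the number of PTEs even though the child shares the PTE table: the
fork-time walk still visits every entry for accounting and to check that
the whole table is eligible for COW sharing.

/*
 * Illustrative userspace model (not kernel code, names invented): even
 * if the child reuses the parent's PTE table via COW, the fork-time
 * walk still visits every entry to do accounting and to check that
 * each mapping may be shared, so the cost remains O(#PTEs).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTRS_PER_PTE 512                /* entries per PTE table on x86-64 */

struct fake_pte {
	uint64_t pfn;                   /* 0 means "not present" */
	bool special;                   /* e.g. a pinned or device mapping */
};

/* Hypothetical eligibility check: may this entry live in a shared table? */
static bool pte_can_cow_share(const struct fake_pte *pte)
{
	return !pte->special;
}

/* Returns mapped pages accounted, or -1 if COW sharing must be refused. */
static int cow_pte_fork_walk(const struct fake_pte table[PTRS_PER_PTE])
{
	int mapped = 0;

	for (int i = 0; i < PTRS_PER_PTE; i++) {    /* the O(N) part */
		if (table[i].pfn == 0)
			continue;
		if (!pte_can_cow_share(&table[i]))
			return -1;                  /* fall back to a full copy */
		mapped++;                           /* rss/mapcount-style accounting */
	}
	return mapped;
}

int main(void)
{
	struct fake_pte table[PTRS_PER_PTE] = { [0] = { .pfn = 42 } };

	printf("accounted %d mapped pages\n", cow_pte_fork_walk(table));
	return 0;
}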

The current version of the series does not provide any performance
improvements for fork(). I would recommend removing claims from the
cover letter about better fork() performance, as this may be
misleading for those looking for a way to speed up forking. In my

From v3 to v4, I changed the implementation of the COW fork() part to do
the accounting and checking. At that time, I also removed most of the
descriptions about better fork() performance. Maybe that's not enough and
it is still somewhat misleading. I will fix this in the next version.
Thanks.

case, I was looking to speed up Redis OSS, which relies on fork() to
create consistent snapshots for driving replicas/backups. The O(N)
per-page operation causes fork() to be slow, so I was hoping that this
series, which does not duplicate the VA during fork(), would make the
operation much quicker.

Indeed, at first, I tried to avoid the O(N) per-page operation by
deferring the accounting and the swap handling to the page fault. But,
as I mentioned, that approach is not suitable for mainline.

Honestly, for improving fork(), I have an idea for skipping the per-page
operation without breaking the logic. However, it would introduce a
complicated mechanism and may add overhead to other features, so it
might not be worth it. It's hard to strike a balance between an
over-complicated mechanism with (probably) better performance and
keeping the data consistent with the page status. So, I would focus on
the safe and stable approach first.

Yes, it is most probably possible, but complexity, robustness and
maintainability have to be considered as well.

Thanks for implementing this approach (only deduplication without other
optimizations) and evaluating it accordingly. It's certainly "cleaner",
such that we only have to mess with unsharing and not with other
accounting/pinning/mapcount thingies. But it also highlights how
intrusive even this basic deduplication approach already is -- and that
most benefits of the original approach require even more complexity on top.

I am not quite sure if the benefit is worth the price (I am not the one
to decide, and I would like to hear other opinions).

My quick thoughts after skimming over the core parts of this series:

(1) Forgetting to break COW on a PTE in some pgtable walker feels quite
      likely (meaning that it might be fairly error-prone), and forgetting
      to break COW on a PTE table means accidentally modifying the shared
      table.
(2) break_cow_pte() can fail, which means that we can fail some
      operations (possibly silently halfway through) now. For example,
      looking at your change_pte_range() change, I suspect it's wrong.
(3) handle_cow_pte_fault() looks quite complicated and needs quite some
      double-checking: we temporarily clear the PMD, to reset it
      afterwards. I am not sure if that is correct. For example, what
      stops another page fault from stumbling over that pmd_none() and
      allocating an empty page table? Maybe there are some locking details
      missing, or they are very subtle such that we'd better document them.
      I recall that THP played quite some tricks to make such cases work ...
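
To make concern (3) concrete, here is a deliberately simplified,
single-threaded userspace model (not the actual kernel code; all names are
invented) of the window that opens when the PMD is temporarily cleared: if
nothing excludes a concurrent fault while the entry reads as none, that
fault may install an empty PTE table which the unshare path then silently
overwrites.

/*
 * Simplified model of the concern: the unshare path temporarily clears
 * the PMD; a concurrent fault observing "pmd_none()" installs a fresh
 * empty PTE table, which the unshare path then overwrites, losing it.
 * All names here are invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

static void *pmd_slot;                    /* models one PMD entry */
static void *racing_table;                /* what the racing fault installed */

static void *alloc_pte_table(void)
{
	return calloc(512, sizeof(unsigned long));
}

/* A racing page fault: observes "pmd_none()" and installs an empty table. */
static void concurrent_fault(void)
{
	if (pmd_slot == NULL) {
		racing_table = alloc_pte_table();
		pmd_slot = racing_table;
	}
}

/* The unshare path: clear the PMD, build a private copy, restore the PMD. */
static void unshare_cow_pte(void *shared_table)
{
	pmd_slot = NULL;                  /* window opens: PMD reads as none */

	concurrent_fault();               /* without locking, nothing stops this */

	void *private_copy = alloc_pte_table();  /* copying shared_table elided */
	(void)shared_table;

	pmd_slot = private_copy;          /* silently discards the racing table */
}

int main(void)
{
	unshare_cow_pte(alloc_pte_table());
	printf("racing fault installed %p, PMD now holds %p (racing table lost)\n",
	       racing_table, pmd_slot);
	return 0;
}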


Actually, in RFC v1 and v2, we proposed a version that skipped that
work, and we got a significant improvement. You can see the numbers in
the RFC v2 cover letter [1]:
"In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
for normal fork"

I suspect the 93% improvement (when the mapcount was not updated) was
only for VAs with 4K pages. With 2M mappings this series did not
provide any benefit, is that correct?

Yes. In this case, the COW PTE performance is similar to the normal
fork().


The thing with THP is that during fork(), we always allocate a backup
PTE table, to be able to PTE-map the THP whenever we have to. Otherwise
we'd eventually have to fail some operations we don't want to fail --
similar to the case where break_cow_pte() could now fail due to -ENOMEM
although we really don't want to fail (e.g., change_pte_range()).

I always considered that wasteful, because in many scenarios, we'll
never ever split a THP and possibly waste memory.
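
For reference, the pattern described above is the deposit/withdraw scheme
used for PMD-mapped THPs (the kernel's pgtable_trans_huge_deposit() /
pgtable_trans_huge_withdraw() helpers): a spare PTE table is allocated up
front so a later split never has to allocate and therefore never fails.
Below is a minimal userspace model of that idea, with invented names,
assuming one deposited table per huge mapping.

/*
 * Userspace model (invented names) of the deposit/withdraw pattern:
 * a backup PTE table is allocated when the huge mapping is set up
 * (where failure is easy to handle), so that a later split can simply
 * withdraw it and never fails with -ENOMEM. The price is one table
 * per PMD-mapped THP that may never be used.
 */
#include <assert.h>
#include <stdlib.h>

struct huge_mapping {
	void *deposited_pte_table;        /* backup table, possibly never used */
};

static void *pte_table_alloc(void)
{
	return calloc(512, sizeof(unsigned long));
}

/* Set up the PMD mapping (e.g. at fault or fork time); may fail cleanly. */
static int map_thp(struct huge_mapping *m)
{
	m->deposited_pte_table = pte_table_alloc();
	return m->deposited_pte_table ? 0 : -1;
}

/* Split the THP: cannot fail, the PTE table was deposited up front. */
static void *split_thp(struct huge_mapping *m)
{
	void *table = m->deposited_pte_table;

	assert(table);                    /* guaranteed by the deposit at map time */
	m->deposited_pte_table = NULL;
	return table;                     /* used to PTE-map the former THP */
}

int main(void)
{
	struct huge_mapping m;

	if (map_thp(&m) == 0)
		free(split_thp(&m));
	return 0;
}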

Yes, it does sound wasteful for a pretty rare corner case that
combines splitting THP in a process, and not having enough memory to
allocate PTE page tables.

Optimizing that for THP (e.g., don't always allocate backup PTE tables, have
some global allocation backup pool for splits + refill when
close-to-empty) might provide similar fork() improvements, both in speed
and memory consumption when it comes to anonymous memory.

This sounds like a reasonable way to optimize the fork performance for
processes with large RSS, which in most cases would have 2M THP
mappings. When you say global pool, do you mean per machine, per
cgroup, or per process?

Good question. I recall that the problem is that we sometimes need a new
pgtable when splitting a THP, but

(a) we might be under spinlock and cannot sleep. We need an atomic
    allocation that might fail. For this, a pool might be helpful.

(b) we might actually be out of memory.

My gut feeling is that a global pool would be sufficient, only to be
used when we run into (a) or (b), to be able to make progress in these
rare cases.
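
A rough sketch of the kind of global backup pool being discussed, as
plain userspace C with invented names (only to illustrate the idea, not
to propose an interface): the split path first tries a regular
allocation; if that fails, it falls back to a small pre-filled pool,
which is refilled later from a context that may sleep. A real
implementation would also need a lock around the pool.

/*
 * Userspace sketch (invented names) of a global backup pool of PTE
 * tables for THP splits: try a regular allocation first; when that
 * fails (atomic context or OOM), fall back to the pool so the split
 * can still make progress, and refill the pool later from a context
 * that may sleep. A real implementation would protect the pool with
 * a lock.
 */
#include <stdlib.h>

#define POOL_MIN 4                     /* refill up to this many entries */
#define POOL_MAX 16

static void *pool[POOL_MAX];
static int pool_count;

static void *pte_table_alloc(void)
{
	return calloc(512, sizeof(unsigned long));
}

/* Split path: must make progress even when a normal allocation fails. */
static void *pte_table_get_for_split(void)
{
	void *table = pte_table_alloc();   /* stands in for an atomic, may-fail alloc */

	if (!table && pool_count > 0)
		table = pool[--pool_count];    /* emergency fallback from the pool */
	return table;
}

/* Refill from a context that may sleep, e.g. after a split or periodically. */
static void pool_refill(void)
{
	while (pool_count < POOL_MIN) {
		void *table = pte_table_alloc();

		if (!table)
			break;                 /* genuinely out of memory; retry later */
		pool[pool_count++] = table;
	}
}

int main(void)
{
	pool_refill();
	free(pte_table_get_for_split());
	pool_refill();
	return 0;
}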

Something that would be interesting to evaluate is which THP split
operations might require some way to recover when we are really OOM.

For example, __split_huge_pmd() is currently not able to report a
failure. I assume that we could sleep in there. And if we're not able to
allocate any memory in there (with sleeping), maybe the process should be
zapped either way by the OOM killer.


--
Thanks,

David / dhildenb



