Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table

On Tue, Feb 14, 2023 at 10:58:30AM +0100, David Hildenbrand wrote:
> On 10.02.23 18:20, Chih-En Lin wrote:
> > On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
> > > > > > Currently, copy-on-write is only used for the mapped memory; the child
> > > > > > process still needs to copy the entire page table from the parent
> > > > > > process during forking. The parent process might take a lot of time and
> > > > > > memory to copy the page table when the parent has a big page table
> > > > > > allocated. For example, the memory usage of a process after forking with
> > > > > > 1 GB mapped memory is as follows:
> > > > > 
> > > > > For some reason, I was not able to reproduce performance improvements
> > > > > with a simple fork() performance measurement program. The results that
> > > > > I saw are the following:
> > > > > 
> > > > > Base:
> > > > > Fork latency per gigabyte: 0.004416 seconds
> > > > > Fork latency per gigabyte: 0.004382 seconds
> > > > > Fork latency per gigabyte: 0.004442 seconds
> > > > > COW kernel:
> > > > > Fork latency per gigabyte: 0.004524 seconds
> > > > > Fork latency per gigabyte: 0.004764 seconds
> > > > > Fork latency per gigabyte: 0.004547 seconds
> > > > > 
> > > > > AMD EPYC 7B12 64-Core Processor
> > > > > Base:
> > > > > Fork latency per gigabyte: 0.003923 seconds
> > > > > Fork latency per gigabyte: 0.003909 seconds
> > > > > Fork latency per gigabyte: 0.003955 seconds
> > > > > COW kernel:
> > > > > Fork latency per gigabyte: 0.004221 seconds
> > > > > Fork latency per gigabyte: 0.003882 seconds
> > > > > Fork latency per gigabyte: 0.003854 seconds
> > > > > 
> > > > > Given that the page table for the child is not copied, I was
> > > > > expecting the performance to be better with the COW kernel, and also
> > > > > not to depend on the size of the parent.
> > > > 
> > > > Yes, the child won't duplicate the page table, but fork will still
> > > > traverse all the page table entries to do the accounting.
> > > > And, since this patch extends COW to the PTE table level, it is no
> > > > longer done at the granularity of individual mapped pages (page
> > > > table entries), so we have to guarantee that every page mapped by
> > > > such a PTE table is eligible for COW mapping.
> > > > This kind of checking also costs some time.
> > > > As a result, because of the accounting and the checking, the COW PTE
> > > > fork still depends on the size of the parent, so the improvement
> > > > might not be significant.
> > > 
> > > The current version of the series does not provide any performance
> > > improvements for fork(). I would recommend removing claims from the
> > > cover letter about better fork() performance, as this may be
> > > misleading for those looking for a way to speed up forking. In my
> > 
> >  From v3 to v4, I changed the implementation of the COW fork() part to
> > do the accounting and checking. At the same time, I also removed most
> > of the descriptions about better fork() performance. Maybe that's not
> > enough and the cover letter is still somewhat misleading. I will fix
> > this in the next version.
> > Thanks.
> > 
> > > case, I was looking to speed up Redis OSS, which relies on fork() to
> > > create consistent snapshots for driving replicas/backups. The O(N)
> > > per-page operation causes fork() to be slow, so I was hoping that this
> > > series, which does not duplicate the VA during fork(), would make the
> > > operation much quicker.
> > 
> > Indeed, at first, I tried to avoid the O(N) per-page operation by
> > deferring the accounting and the swap stuff to the page fault. But,
> > as I mentioned, it's not suitable for the mainline.
> > 
> > Honestly, for improving fork(), I have an idea to skip the per-page
> > operation without breaking the logic. However, this would introduce a
> > complicated mechanism and may add overhead for other features. It
> > might not be worth it. It's hard to strike a balance between an
> > over-complicated mechanism with (probably) better performance, and
> > keeping the data consistent with the page status. So, I would focus on
> > a safe and stable approach first.
> 
> Yes, it is most probably possible, but complexity, robustness and
> maintainability have to be considered as well.
> 
> Thanks for implementing this approach (only deduplication without other
> optimizations) and evaluating it accordingly. It's certainly "cleaner", such
> that we only have to mess with unsharing and not with other
> accounting/pinning/mapcount thingies. But it also highlights how intrusive
> even this basic deduplication approach already is -- and that most benefits
> of the original approach require even more complexity on top.
> 
> I am not quite sure if the benefit is worth the price (I am not the one
> to decide, and I would like to hear other opinions).

I'm looking at the discussion of page table sharing from 2002 [1].
It looks like there were also some patches trying to improve fork()
between 2002 and 2006.

After that, I also saw a thread about the benchmark of another shared
page table patch. I can't find the original patch though [2].
But I found what is probably the same patch from 2005 [3], and it also
mentions the previous benchmark discussion:

"
For those familiar with the shared page table patch I did a couple of years
ago, this patch does not implement copy-on-write page tables for private
mappings.  Analysis showed the cost and complexity far outweighed any
potential benefit.
"

However, things might be different now. Take the implementation, for
example: we have the split page table lock these days, so we don't have
to consider a global page_table_share_lock (see the sketch below). Also,
we now have different use cases (shells [2] vs. VM cloning and fuzzing)
to consider.
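
Just to illustrate what I mean (a minimal sketch using current mainline
helpers, not code from this series): each PTE table now has its own
lock, taken via pte_offset_map_lock(), instead of everything serializing
on one global page_table_share_lock:

    spinlock_t *ptl;
    pte_t *pte;

    /* Lock only the PTE table that covers "addr". */
    pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
    /* ... inspect or modify the entries of this one table ... */
    pte_unmap_unlock(pte, ptl);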

Nonetheless, I still think that discussion can give us some useful
insight.

BTW, it seems like the 2002 patch [1] is different from the 2002 [2]
and 2005 [3] ones.

[1] https://lkml.iu.edu/hypermail/linux/kernel/0202.2/0102.html
[2] https://lore.kernel.org/linux-mm/3E02FACD.5B300794@xxxxxxxxx/
[3] https://lore.kernel.org/linux-mm/7C49DFF721CB4E671DB260F9@%5B10.1.1.4%5D/T/#u

> My quick thoughts after skimming over the core parts of this series
> 
> (1) forgetting to break COW on a PTE in some pgtable walker feels quite
>     likely (meaning that it might be fairly error-prone), as does
>     forgetting to break COW on a PTE table and thereby accidentally
>     modifying the shared table.

Maybe I should also handle arch/ and other parts.
I will keep looking for the places I missed.

> (2) break_cow_pte() can fail, which means that we can fail some
>     operations (possibly silently halfway through) now. For example,
>     looking at your change_pte_range() change, I suspect it's wrong.

Maybe I should add a WARN_ON() and skip the range whose COW PTE table
failed to break, something like the sketch below.
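
Just a sketch of what I have in mind for change_pte_range() (assuming
the break_cow_pte(vma, pmd, addr) helper from this series returns 0 on
success and a negative errno on failure; the error handling here is only
a suggestion, not the final code):

    /*
     * If unsharing the COW-shared PTE table fails (e.g. -ENOMEM),
     * warn and skip this range instead of writing through the
     * still-shared table.
     */
    if (WARN_ON(break_cow_pte(vma, pmd, addr) < 0))
        return 0;    /* no PTEs changed for this range */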

> (3) handle_cow_pte_fault() looks quite complicated and needs quite some
>     double-checking: we temporarily clear the PMD, to reset it
>     afterwards. I am not sure if that is correct. For example, what
>     stops another page fault stumbling over that pmd_none() and
>     allocating an empty page table? Maybe there are some locking details
>     missing or they are very subtle such that we better document them. I
>    recall that THP played quite some tricks to make such cases work ...

I think that holding the mmap write lock may be enough (btw, I added
mmap_assert_write_locked() to the fault function; see the sketch below).
But I might be wrong. I will look at the THP code to see how it handles
such cases. Thanks.
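
Roughly this, at the top of handle_cow_pte_fault() (only a sketch, not
the exact diff; whether the write lock is really held on every path that
reaches this function is exactly what I still need to double-check):

    /*
     * Assumption under discussion: the caller holds mmap_lock for
     * writing, so no concurrent fault can observe the temporarily
     * cleared PMD and install an empty PTE table behind our back.
     */
    mmap_assert_write_locked(vmf->vma->vm_mm);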

Thanks for the review.

> > 
> > > > Actually, in RFC v1 and v2, we proposed a version that skipped that
> > > > work, and we got a significant improvement. You can see the numbers
> > > > in the RFC v2 cover letter [1]:
> > > > "In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
> > > > for normal fork"
> > > 
> > > I suspect the 93% improvement (when the mapcount was not updated) was
> > > only for VAs with 4K pages. With 2M mappings this series did not
> > > provide any benefit, is this correct?
> > 
> > Yes. In this case, the COW PTE performance is similar to the normal
> > fork().
> 
> 
> The thing with THP is that, during fork(), we always allocate a backup PTE
> table, to be able to PTE-map the THP whenever we have to. Otherwise we'd
> have to eventually fail some operations we don't want to fail -- similar to
> the case where break_cow_pte() could fail now due to -ENOMEM although we
> really don't want to fail (e.g., change_pte_range() ).
> 
> I always considered that wasteful, because in many scenarios, we'll never
> ever split a THP and possibly waste memory.
> 
> Optimizing that for THP (e.g., don't always allocate the backup PTE table,
> have some global allocation backup pool for splits + refill when
> close-to-empty) might
> provide similar fork() improvements, both in speed and memory consumption
> when it comes to anonymous memory.

When collapsing huge pages, could we reuse those PTE tables as the
backup, so that we don't have to allocate a new one or maintain the
pool?
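
If I read khugepaged correctly, the collapse path already keeps the old
PTE table and deposits it as the backup for a later split, roughly like
this (simplified from collapse_huge_page(), just to illustrate what I
mean by reusing it):

    /* The PTE table that used to map the range is not freed ... */
    pgtable = pmd_pgtable(_pmd);
    ...
    /* ... but is deposited as the backup for a future THP split. */
    pgtable_trans_huge_deposit(mm, pmd, pgtable);
    set_pmd_at(mm, address, pmd, _pmd);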

Thanks,
Chih-En Lin



