Re: [PATCH 2/2] mm/mprotect: do not flush on permission promotion


> On Oct 7, 2021, at 10:07 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:
> 
> On 07.10.21 18:16, Nadav Amit wrote:
>>> On Oct 7, 2021, at 5:13 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:
>>> 
>>> On 25.09.21 22:54, Nadav Amit wrote:
>>>> From: Nadav Amit <namit@xxxxxxxxxx>
>>>> Currently, using mprotect() or uffd to unprotect a memory region
>>>> causes a TLB flush. At least on x86, as protection is promoted, no
>>>> TLB flush is needed.
>>>> Add an arch-specific pte_may_need_flush() which tells whether a TLB
>>>> flush is needed based on the old PTE and the new one. Implement an x86
>>>> pte_may_need_flush().
>>>> For x86, PTE protection promotion or changes of software bits do not
>>>> require a flush. Additionally, add logic that considers the dirty-bit.
>>>> Changes to the access-bit do not trigger a TLB flush, although
>>>> architecturally they should, as Linux treats the access-bit as a hint.
>>> 
>>> Is the added LOC worth the benefit? IOW, do we have some benchmark that really benefits from that?
>> So you are asking whether the added ~10 LOC (net) are worth the benefit?
> 
> I read "3 files changed, 46 insertions(+), 1 deletion(-)" as optimizing something without proof, so I naturally have to ask. So this is just a "usually we optimize and show numbers to prove it" comment.

These numbers are misleading, as they count comments and other non-code.
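
To give a feel for what the logic amounts to (this is only a rough sketch with simplified checks, not the actual patch), the x86 decision essentially boils down to:

static inline bool pte_may_need_flush(pte_t oldpte, pte_t newpte)
{
	/* No valid old mapping means nothing can be cached in the TLB. */
	if (!pte_present(oldpte))
		return false;

	/* Removing write permission must flush a possibly cached writable entry. */
	if (pte_write(oldpte) && !pte_write(newpte))
		return true;

	/* Clearing a set dirty-bit without a flush could lose a dirty indication. */
	if (pte_dirty(oldpte) && !pte_dirty(newpte))
		return true;

	/*
	 * Pure permission promotion (e.g., RO -> RW) and changes of
	 * software bits can skip the flush; the access-bit is treated
	 * as a hint and is ignored here.
	 */
	return false;
}

The real patch also has to consider the remaining protection bits (e.g., NX), but the structure is along these lines, and a good share of the added lines are comments.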

[snip]

> 
> Any numbers would be helpful.
> 
>> If you want, I will write a micro-benchmark and give you numbers.
>> If you are looking for further optimizations (although you did not
>> indicate so), such as doing the TLB batching from do_mprotect_key()
>> (i.e., batching across VMAs), we can discuss them and apply them on
>> top of these patches.
> 
> I think this patch itself is sufficient if we can show a benefit; I do wonder if existing benchmarks could already show a benefit. I feel like they should if this makes a difference. Excessive mprotect() usage (protect<>unprotect) isn't something unusual.

I do not know of a concrete benchmark (other than my workload, which I cannot share right now) that does enough mprotect() calls for the effect to be measurable in overall performance. I would argue that many optimizations in the kernel would not, by themselves, show a measurable benefit on common macrobenchmarks.

Anyhow, per your request I created a small micro-benchmark that runs mprotect(PROT_READ) and mprotect(PROT_READ|PROT_WRITE) in a loop and measured the time the latter takes (the unprotect, for which no TLB flush should be needed). I ran the benchmark in a VM (guest) on top of KVM.
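
In rough form, the measured loop looks something like the following (a simplified, single-threaded sketch, not the exact benchmark; the mapping size and iteration count are arbitrary):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <x86intrin.h>

#define LEN	(16 * 4096)
#define ITERS	100000

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	uint64_t cycles = 0;

	if (p == MAP_FAILED)
		return 1;

	memset(p, 1, LEN);			/* populate the PTEs */

	for (int i = 0; i < ITERS; i++) {
		mprotect(p, LEN, PROT_READ);		/* write-protect */
		uint64_t start = __rdtsc();
		mprotect(p, LEN, PROT_READ | PROT_WRITE);	/* promote */
		cycles += __rdtsc() - start;
	}

	printf("cycles per unprotect: %" PRIu64 "\n", cycles / ITERS);
	return 0;
}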

The cost (cycles) per mprotect(PROT_READ|PROT_WRITE) operation:

		1 thread		2 threads
		--------		---------
w/patch:	2496			2505			
w/o patch:	5342			10458

[ The results for 1 thread might seem strange, as one could expect the overhead in this case to be no more than ~250 cycles, which is the time a TLB invalidation of a single PTE takes. This extra overhead is probably related to “page fracturing”, which happens when the VM memory is backed by 4KB pages. In such scenarios, a single PTE invalidation in the VM can cause a full TLB flush on Intel. The full flush is needed to ensure that if the invalidated address was mapped through a huge page in the VM, any relevant 4KB mapping that is cached in the TLB (after fracturing due to the 4KB GPA->HPA mapping) is removed. ]

Let me know if you want me to share the micro-benchmark with you. I am not going to mention the results in the commit log, because I think the overhead of unnecessary TLB invalidation is well established.




