Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

> On May 13, 2019, at 1:36 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> 
> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
> 
>>>>> And we can fix that by having tlb_finish_mmu() sync up. Never let a
>>>>> concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers
>>>>> have completed.
>>>>> 
>>>>> This should not be too hard to make happen.
>>>> 
>>>> This synchronization sounds much more expensive than what I proposed. But I
>>>> agree that cache-lines that move from one CPU to another might become an
>>>> issue. But I think that the scheme I suggested would minimize this overhead.
>>> 
>>> Well, it would have a lot more unconditional atomic ops. My scheme only
>>> waits when there is actual concurrency.
>> 
>> Well, something has to give. I didn’t think that if the same core does the
>> atomic op it would be too expensive.
> 
> They're still at least 20 cycles a pop, uncontended.
> 
>>> I _think_ something like the below ought to work, but its not even been
>>> near a compiler. The only problem is the unconditional wakeup; we can
>>> play games to avoid that if we want to continue with this.
>>> 
>>> Ideally we'd only do this when there's been actual overlap, but I've not
>>> found a sensible way to detect that.
>>> 
>>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>>> index 4ef4bbe78a1d..b70e35792d29 100644
>>> --- a/include/linux/mm_types.h
>>> +++ b/include/linux/mm_types.h
>>> @@ -590,7 +590,12 @@ static inline void dec_tlb_flush_pending(struct mm_struct *mm)
>>> 	 *
>>> 	 * Therefore we must rely on tlb_flush_*() to guarantee order.
>>> 	 */
>>> -	atomic_dec(&mm->tlb_flush_pending);
>>> +	if (atomic_dec_and_test(&mm->tlb_flush_pending)) {
>>> +		wake_up_var(&mm->tlb_flush_pending);
>>> +	} else {
>>> +		wait_var_event(&mm->tlb_flush_pending,
>>> +			       !atomic_read_acquire(&mm->tlb_flush_pending));
>>> +	}
>>> }
>> 
>> It still seems very expensive to me, at least for certain workloads (e.g.,
>> Apache with multithreaded MPM).
> 
> Is that Apache-MPM workload triggering this lots? Having a known
> benchmark for this stuff is good for when someone has time to play with
> things.

Setting up Apache2 with mpm_worker causes every request to go through an
mmap-writev-munmap flow on every thread. I didn’t run this workload after
the patches that downgrade mmap_sem to read before the page-table zapping
were introduced. I presume these patches allow the page-table zapping to be
done concurrently, and that this workload would therefore hit this flow.
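
Roughly, each worker thread does something like the following per request
(a simplified userspace sketch of the pattern, not actual Apache code;
error handling and the real buffer management are elided):

#include <string.h>
#include <sys/mman.h>
#include <sys/uio.h>

static void handle_request(int client_fd, size_t len)
{
	struct iovec iov;

	/* Each request maps a fresh anonymous buffer... */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return;

	memset(buf, 'x', len);	/* stand-in for building the response */

	iov.iov_base = buf;
	iov.iov_len  = len;
	writev(client_fd, &iov, 1);

	/*
	 * ...and unmaps it when done. With many threads doing this at
	 * once, the munmap() page-table zapping and TLB flushing is
	 * exactly the path under discussion.
	 */
	munmap(buf, len);
}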

>> It may be possible to avoid false-positive nesting indications (when the
>> flushes do not overlap) by creating a new struct mmu_gather_pending, with
>> something like:
>> 
>>  struct mmu_gather_pending {
>> 	u64 start;
>> 	u64 end;
>> 	struct mmu_gather_pending *next;
>>  }
>> 
>> tlb_finish_mmu() would then iterate over the mm->mmu_gather_pending
>> (pointing to the linked list) and find whether there is any overlap. This
>> would still require synchronization (acquiring a lock when allocating and
>> deallocating or something fancier).
> 
> We have an interval_tree for this, and yes, that's how far I got :/
> 
> The other thing I was thinking of is trying to detect overlap through
> the page-tables themselves, but we have a distinct lack of storage
> there.

I tried to think about saving some generation info somewhere in the
page struct, but I could not come up with a reasonable solution that
would not require traversing all the page tables again once the TLB
flush is done.
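
For completeness, here is a rough sketch of the overlap check I had in
mind for the mmu_gather_pending list above (uncompiled, like your patch;
the list head and lock in mm_struct are made-up names):

struct mmu_gather_pending {
	u64 start;
	u64 end;
	struct mmu_gather_pending *next;
};

/*
 * Would be called from tlb_finish_mmu(). Returns true if the range
 * [start, end) of this gather overlaps any other pending gather, in
 * which case the flushed range cannot be trusted and a full flush is
 * needed.
 */
static bool tlb_gather_overlaps(struct mm_struct *mm,
				struct mmu_gather_pending *self,
				u64 start, u64 end)
{
	struct mmu_gather_pending *p;
	bool overlap = false;

	/* mmu_gather_pending_lock is a made-up lock in mm_struct */
	spin_lock(&mm->mmu_gather_pending_lock);
	for (p = mm->mmu_gather_pending; p != NULL; p = p->next) {
		if (p == self)
			continue;
		if (p->start < end && start < p->end) {
			overlap = true;
			break;
		}
	}
	spin_unlock(&mm->mmu_gather_pending_lock);

	return overlap;
}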

> The thing is, if this threaded monster runs on all CPUs (busy front end
> server) and does a ton of invalidation due to all the short lived
> request crud, then all the extra invalidations will add up too. Having
> to do process (machine in this case) wide invalidations is expensive,
> having to do more of them surely isn't cheap either.
> 
> So there might be something to win here.

Yes. I remember that these full TLB flushes leave their mark.

BTW: sometimes you don’t see the effect of these full TLB flushes as much
in VMs. I encountered a strange phenomenon at the time - INVLPG for an
arbitrary page caused my Haswell machine to flush the entire TLB, when the
INVLPG was issued inside a VM. It took me quite some time to analyze the
problem. Eventually Intel told me it is part of what is called “page
fracturing” - if the host uses 4KB pages in the EPT, the CPU (usually) has
to flush the entire TLB on any INVLPG, since it does not know the size of
the flushed page.

I really need to finish my blog post about it some day.



