Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

> On May 14, 2019, at 12:15 AM, Jan Stancek <jstancek@xxxxxxxxxx> wrote:
> 
> 
> ----- Original Message -----
>> On May 13, 2019 4:01 PM, Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx> wrote:
>> 
>> 
>> On 5/13/19 9:38 AM, Will Deacon wrote:
>>> On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
>>>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>>>> index 99740e1..469492d 100644
>>>> --- a/mm/mmu_gather.c
>>>> +++ b/mm/mmu_gather.c
>>>> @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
>>>>  {
>>>>      /*
>>>>       * If there are parallel threads are doing PTE changes on same range
>>>> -     * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
>>>> -     * flush by batching, a thread has stable TLB entry can fail to flush
>>>> -     * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
>>>> -     * forcefully if we detect parallel PTE batching threads.
>>>> +     * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB
>>>> +     * flush by batching, one thread may end up seeing inconsistent PTEs
>>>> +     * and result in having stale TLB entries.  So flush TLB forcefully
>>>> +     * if we detect parallel PTE batching threads.
>>>> +     *
>>>> +     * However, some syscalls, e.g. munmap(), may free page tables; this
>>>> +     * needs to force flush everything in the given range. Otherwise this
>>>> +     * may result in stale TLB entries on some architectures, e.g.
>>>> +     * aarch64, which can specify which TLB level to flush.
>>>>       */
>>>> -    if (mm_tlb_flush_nested(tlb->mm)) {
>>>> -            __tlb_reset_range(tlb);
>>>> -            __tlb_adjust_range(tlb, start, end - start);
>>>> +    if (mm_tlb_flush_nested(tlb->mm) && !tlb->fullmm) {
>>>> +            /*
>>>> +             * Since we can't tell what we actually should have
>>>> +             * flushed, flush everything in the given range.
>>>> +             */
>>>> +            tlb->freed_tables = 1;
>>>> +            tlb->cleared_ptes = 1;
>>>> +            tlb->cleared_pmds = 1;
>>>> +            tlb->cleared_puds = 1;
>>>> +            tlb->cleared_p4ds = 1;
>>>> +
>>>> +            /*
>>>> +             * Some architectures, e.g. ARM, that have range invalidation
>>>> +             * and care about VM_EXEC for I-Cache invalidation, need
>>>> +             * force vma_exec set.
>>>> +             */
>>>> +            tlb->vma_exec = 1;
>>>> +
>>>> +            /* Force vma_huge clear to guarantee a safer flush */
>>>> +            tlb->vma_huge = 0;
>>>> +
>>>> +            tlb->start = start;
>>>> +            tlb->end = end;
>>>>      }
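
(For context: mm_tlb_flush_nested() only tells us that another thread
also has a deferred flush in flight on the same mm; a minimal sketch,
roughly as in include/linux/mm_types.h around v5.1:

  static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
  {
          /* more than one mmu_gather is live on this mm */
          return atomic_read(&mm->tlb_flush_pending) > 1;
  }

so the hunk above cannot know what the other threads actually cleared,
hence the "flush everything in the given range" fallback.)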
>>> Whilst I think this is correct, it would be interesting to see whether
>>> or not it's actually faster than just nuking the whole mm, as I mentioned
>>> before.
>>> 
>>> At least in terms of getting a short-term fix, I'd prefer the diff below
>>> if it's not measurably worse.
>> 
>> I did a quick test with ebizzy (96 threads with 5 iterations) on my x86
>> VM. It shows a slight slowdown in records/s but much more sys time
>> spent with the fullmm flush; the data is below.
>> 
>>                      nofullmm      fullmm
>> ops (records/s)      225606        225119
>> sys (s)              0.69          1.14
>> 
>> It looks like the slight reduction in records/s is caused by the
>> increase in sys time.
>> 
>>> Will
>>> 
>>> --->8
>>> 
>>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>>> index 99740e1dd273..cc251422d307 100644
>>> --- a/mm/mmu_gather.c
>>> +++ b/mm/mmu_gather.c
>>> @@ -251,8 +251,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
>>>        * forcefully if we detect parallel PTE batching threads.
>>>        */
>>>       if (mm_tlb_flush_nested(tlb->mm)) {
>>> +             tlb->fullmm = 1;
>>>               __tlb_reset_range(tlb);
>>> -             __tlb_adjust_range(tlb, start, end - start);
>>> +             tlb->freed_tables = 1;
>>>       }
>>> 
>>>       tlb_flush_mmu(tlb);
>> 
>> 
>> I think that this should have set need_flush_all and not fullmm.
> 
> Wouldn't that skip the flush?
> 
> If fullmm == 0, then __tlb_reset_range() sets tlb->end = 0.
>  tlb_flush_mmu
>    tlb_flush_mmu_tlbonly
>      if (!tlb->end)
>         return
> 
> Replacing fullmm with need_flush_all brings the problem back / the reproducer hangs.
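
For reference, the early return Jan describes is in
tlb_flush_mmu_tlbonly(); a minimal sketch, roughly as in
mm/mmu_gather.c around v5.1 (the real function also calls
mmu_notifier_invalidate_range() before resetting):

  static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
  {
          /* empty accumulated range: nothing to flush */
          if (!tlb->end)
                  return;

          tlb_flush(tlb);
          __tlb_reset_range(tlb);
  }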

Maybe setting need_flush_all does not have the right effect, but setting
fullmm and then calling __tlb_reset_range() when the PTEs were already
zapped seems strange.
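
To illustrate, __tlb_reset_range() (again a minimal sketch, roughly as
of v5.1) only leaves a flushable range behind when fullmm is set:

  static void __tlb_reset_range(struct mmu_gather *tlb)
  {
          if (tlb->fullmm) {
                  /* "everything": tlb->end stays non-zero */
                  tlb->start = tlb->end = ~0;
          } else {
                  /* empty range: tlb_flush_mmu_tlbonly() skips the flush */
                  tlb->start = TASK_SIZE;
                  tlb->end = 0;
          }
          tlb->freed_tables = 0;
          tlb->cleared_ptes = 0;
          tlb->cleared_pmds = 0;
          tlb->cleared_puds = 0;
          tlb->cleared_p4ds = 0;
  }

So Will's diff flushes because of the fullmm encoding, not because the
gathered state is preserved.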

fullmm is described as:

        /*
         * we are in the middle of an operation to clear
         * a full mm and can make some optimizations
         */

And this is not the case.
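
As far as I can tell, fullmm is only meant to be set when the whole
address space goes away (exit/execve); roughly, from the generic
mmu_gather setup around v5.1:

          /* only a 0..~0 teardown counts as a full mm */
          tlb->fullmm = !(start | (end + 1));

So the diff above effectively (ab)uses the flag just to make
__tlb_reset_range() pick the flush-everything encoding.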




