Re: [lkp-robot] [mm] 7674270022: will-it-scale.per_process_ops -19.3% regression

Minchan Kim <minchan@xxxxxxxxxx> wrote:

> On Wed, Aug 09, 2017 at 10:59:02AM +0800, Ye Xiaolong wrote:
>> On 08/08, Minchan Kim wrote:
>>> On Mon, Aug 07, 2017 at 10:51:00PM -0700, Nadav Amit wrote:
>>>> Nadav Amit <nadav.amit@xxxxxxxxx> wrote:
>>>> 
>>>>> Minchan Kim <minchan@xxxxxxxxxx> wrote:
>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> On Tue, Aug 08, 2017 at 09:19:23AM +0800, kernel test robot wrote:
>>>>>>> Greeting,
>>>>>>> 
>>>>>>> FYI, we noticed a -19.3% regression of will-it-scale.per_process_ops due to commit:
>>>>>>> 
>>>>>>> 
>>>>>>> commit: 76742700225cad9df49f05399381ac3f1ec3dc60 ("mm: fix MADV_[FREE|DONTNEED] TLB flush miss problem")
>>>>>>> url: https://github.com/0day-ci/linux/commits/Nadav-Amit/mm-migrate-prevent-racy-access-to-tlb_flush_pending/20170802-205715
>>>>>>> 
>>>>>>> 
>>>>>>> in testcase: will-it-scale
>>>>>>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
>>>>>>> with following parameters:
>>>>>>> 
>>>>>>> 	nr_task: 16
>>>>>>> 	mode: process
>>>>>>> 	test: brk1
>>>>>>> 	cpufreq_governor: performance
>>>>>>> 
>>>>>>> test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both process- and thread-based variants of the test in order to see any differences between the two.
>>>>>>> test-url: https://github.com/antonblanchard/will-it-scale
>>>>>> 
>>>>>> Thanks for the report.
>>>>>> Could you explain what kind of workload you are testing?
>>>>>> 
>>>>>> Does it call madvise(MADV_DONTNEED) frequently in parallel on multiple
>>>>>> threads?
>>>>> 
>>>>> According to the description it is "testcase:brk increase/decrease of one
>>>>> page". According to the mode it spawns multiple processes, not threads.
>>>>> 
>>>>> Since a single page is unmapped each time, and the iTLB-loads increase
>>>>> dramatically, I would suspect that for some reason a full TLB flush is
>>>>> caused during do_munmap().
>>>>> 
>>>>> If I find some free time, I’ll try to profile the workload - but feel free
>>>>> to beat me to it.
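
For reference, the heart of the brk1 testcase is essentially the loop below
(a paraphrased, standalone sketch; see the test-url above for the real
source). Each iteration grows the heap by one page and then shrinks it
again, so the shrink side exercises the single-page unmap/TLB-flush path on
every pass:

/* Standalone sketch of the brk1 loop (paraphrased from will-it-scale). */
#define _DEFAULT_SOURCE
#include <unistd.h>

int main(void)
{
	char *addr = sbrk(0);		/* current program break */

	for (unsigned long i = 0; i < 1000000; i++) {
		addr += 4096;		/* grow the heap by one page */
		if (brk(addr) != 0)
			return 1;
		addr -= 4096;		/* shrink it again: one-page unmap */
		if (brk(addr) != 0)
			return 1;
	}
	return 0;
}

With nr_task=16 and mode=process, will-it-scale runs 16 such loops in
parallel as separate processes and reports the aggregate iteration rate.
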
>>>> 
>>>> The root cause appears to be that tlb_finish_mmu() does not call
>>>> dec_tlb_flush_pending(), as it should. Any chance you can take care of it?
>>> 
>>> Oops, but on second look, it seems it's not my fault. ;-)
>>> https://marc.info/?l=linux-mm&m=150156699114088&w=2
>>> 
>>> Anyway, thanks for pointing it out.
>>> xiaolong.ye, could you retest with this fix?
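
(For anyone following along: if I remember the patch behind that link
correctly, it amounts to adding the missing dec_tlb_flush_pending() at the
end of tlb_finish_mmu(), roughly like the sketch below - not necessarily the
exact diff that gets merged.)

void tlb_finish_mmu(struct mmu_gather *tlb,
		    unsigned long start, unsigned long end)
{
	/*
	 * Flush forcefully if another thread batched PTE changes on the same
	 * range under the non-exclusive mmap_sem, so a racing MADV_DONTNEED
	 * or MADV_FREE cannot be left with stale TLB entries.
	 */
	bool force = mm_tlb_flush_nested(tlb->mm);

	arch_tlb_finish_mmu(tlb, start, end, force);
	/* Pairs with inc_tlb_flush_pending() done in tlb_gather_mmu(). */
	dec_tlb_flush_pending(tlb->mm);
}

Without that final call tlb_flush_pending never drops back to zero, so every
later unmap in the process sees mm_tlb_flush_nested() as true and takes the
forced-flush path - which would explain the iTLB-load blowup in the report.
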
>> 
>> I've queued the tests 5 times and the results show this patch (e8f682574e4 "mm:
>> decrease tlb flush pending count in tlb_finish_mmu") does recover the
>> performance.
>> 
>> 378005bdbac0a2ec  76742700225cad9df49f053993  e8f682574e45b6406dadfffeb4  
>> ----------------  --------------------------  --------------------------  
>>         %stddev      change         %stddev      change         %stddev
>>             \          |                \          |                \  
>>   3405093             -19%    2747088              -2%    3348752        will-it-scale.per_process_ops
>>      1280 ±  3%        -2%       1257 ±  3%        -6%       1207        vmstat.system.cs
>>      2702 ± 18%        11%       3002 ± 19%        17%       3156 ± 18%  numa-vmstat.node0.nr_mapped
>>     10765 ± 18%        11%      11964 ± 19%        17%      12588 ± 18%  numa-meminfo.node0.Mapped
>>      0.00 ± 47%       -40%       0.00 ± 45%       -84%       0.00 ± 42%  mpstat.cpu.soft%
>> 
>> Thanks,
>> Xiaolong
> 
> Thanks for testing!

Sorry again for screwing up your patch, Minchan.
