Re: [PATCH v1 0/4] Enable >0 order folio memory compaction

On 21/11/2023 16:45, Zi Yan wrote:
> On 21 Nov 2023, at 10:46, Ryan Roberts wrote:
> 
>>>
>>> vm-scalability results
>>> ===
>>>
>>> =========================================================================================
>>> compiler/kconfig/rootfs/runtime/tbox_group/test/testcase:
>>>   gcc-13/defconfig/debian/300s/qemu-vm/mmap-xread-seq-mt/vm-scalability
>>>
>>> commit:
>>>   6.6.0-rc4-mm-everything-2023-10-21-02-40+
>>>   6.6.0-rc4-split-folio-in-compaction+
>>>   6.6.0-rc4-folio-migration-in-compaction+
>>>   6.6.0-rc4-folio-migration-free-page-split+
>>>   6.6.0-rc4-folio-migration-free-page-split-sort-src+
>>>
>>> 6.6.0-rc4-mm-eve 6.6.0-rc4-split-folio-in-co 6.6.0-rc4-folio-migration-i 6.6.0-rc4-folio-migration-f 6.6.0-rc4-folio-migration-f
>>> ---------------- --------------------------- --------------------------- --------------------------- ---------------------------
>>>          %stddev     %change         %stddev     %change         %stddev     %change         %stddev     %change         %stddev
>>>              \          |                \          |                \          |                \          |                \
>>>   12896955            +2.7%   13249322            -4.0%   12385175 ±  5%      +1.1%   13033951            -0.4%   12845698        vm-scalability.throughput
>>
>> Hi Zi,
>>
>> Are you able to add any commentary to these results? I'm struggling to
>> interpret them: is a positive or negative change better (are they times or
>> rates)? What are the stddev values? The heading suggests percent, but the
>> values are huge. I'm trying to understand what the error bars look like -
>> are the swings real or noise?
> 
> The metric is vm-scalability.throughput, so larger is better. Some %stddev
> values are not shown because they are too small. For 6.6.0-rc4-folio-migration-in-compaction+,
> the %stddev is greater than the %change, so the change might be noise.
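[The rule of thumb above - a %change smaller than the %stddev of the runs is likely noise - can be sketched as follows. This is an illustrative example, not part of the thread; the throughput numbers below are hypothetical, and the real lkp/0-day reports compute these statistics internally.]

```python
# Hedged sketch (not from the thread): treat a benchmark delta as noise when
# the percentage change is smaller than the run-to-run standard deviation,
# expressed as a percentage of the mean.
from statistics import mean, stdev

def pct_stddev(samples):
    """Sample standard deviation as a percentage of the sample mean."""
    return stdev(samples) / mean(samples) * 100

def pct_change(baseline_mean, patched_mean):
    """Percentage change of the patched mean relative to the baseline mean."""
    return (patched_mean - baseline_mean) / baseline_mean * 100

# Hypothetical per-iteration vm-scalability.throughput numbers.
baseline = [12896955, 12900123, 12893410]
patched  = [12385175, 13100000, 11900000]   # a noisy set of runs

change = pct_change(mean(baseline), mean(patched))
noise  = pct_stddev(patched)
print(f"%change = {change:+.1f}%, %stddev = {noise:.1f}%")
if abs(change) < noise:
    print("change is within run-to-run noise")
```

[With these made-up numbers the change is about -3%, but the runs scatter by almost 5%, so the drop cannot be distinguished from noise - the same situation as the -4.0% +/- 5% entry in the table above.]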

Ahh got it - thanks!

> 
> Also, I talked to DavidH in the last THP Cabal meeting about this. He suggested
> that there is a lot of noise in vm-scalability results like the ones I have
> here, and that I should run more iterations and run on bare metal. I am
> currently rerunning the benchmarks on bare metal, and with more iterations on
> the existing VM, and will report the results later. Please note that the runs
> take quite some time.

Ahh ok, I'll wait for the bare-metal numbers and will disregard these for now.
Thanks!

> 
> In addition, I will look for other fragmentation-related benchmarks, so we can
> see the impact on memory fragmentation.
> 
> --
> Best Regards,
> Yan, Zi




