Re: [PATCH v2] mm: Add configuration to control whether vmpressure notifier is enabled

On Fri 20-08-21 23:20:40, yong w wrote:
> Michal Hocko <mhocko@xxxxxxxx> wrote on Fri, Aug 20, 2021 at 7:26 PM:
> >
> > On Thu 19-08-21 16:53:39, yongw.pur@xxxxxxxxx wrote:
> > > From: wangyong <wang.yong@xxxxxxxxxx>
> > >
> > > Inspired by the PSI feature, the vmpressure notifier should
> > > also be configurable, because it is an independent feature
> > > which notifies userspace of memory pressure.
> >
> > Yes, it is an independent feature indeed, but what is the actual
> > reason to add more configuration space here? Config options are not
> > free, both from the user experience POV and in code maintenance.
> > Why do we need to disable this feature? Who can benefit from such a
> > setup?
> >
> > > So we add a configuration option to control whether the
> > > vmpressure notifier is enabled, and provide a boot parameter so
> > > that it can be toggled flexibly.
> >
> > Flexibility is nice but not free as mentioned above.
> >
> > > We used Christoph Lameter's pagefault tool
> > > (https://lkml.org/lkml/2006/8/29/294) for comparative testing,
> > > with 5.14.0-rc5-next-20210813 on x86_64 with 4G of RAM.
> > > To ensure that the vmpressure path is executed, we enabled zram
> > > and let the program occupy enough memory that some of it was
> > > swapped out.
> > >
> > > unpatched:
> > > Gb   Rep  Thr  CLine  User(s)  System(s)  Wall(s)  flt/cpu/s    fault/wsec
> > > 2    1    1    1      0.1      0.97       1.13     485490.062   463533.34
> > > 2    1    1    1      0.11     0.96       1.12     483086.072   465309.495
> > > 2    1    1    1      0.1      0.95       1.11     496687.098   469887.643
> > > 2    1    1    1      0.09     0.97       1.11     489711.434   468402.102
> > > 2    1    1    1      0.13     0.94       1.12     484159.415   466080.941
> > > average               0.106    0.958      1.118    487826.8162  466642.7042
> > >
> > > patched and CONFIG_MEMCG_VMPRESSURE is not set:
> > > Gb   Rep  Thr  CLine  User(s)  System(s)  Wall(s)  flt/cpu/s    fault/wsec
> > > 2    1    1    1      0.1      0.96       1.1      490942.682   473125.98
> > > 2    1    1    1      0.08     0.99       1.13     484987.521   463161.975
> > > 2    1    1    1      0.09     0.96       1.09     498824.98    476696.066
> > > 2    1    1    1      0.1      0.97       1.12     484127.673   465951.238
> > > 2    1    1    1      0.1      0.97       1.11     487032       468964.662
> > > average               0.094    0.97       1.11     489182.9712  469579.9842
> > >
> > > According to flt/cpu/s, performance improved by about 0.2%, which is not significant.
> >
> > I haven't checked how those numbers are calculated, but from a very
> > brief look it seems like the variation between different runs is
> > higher than 0.2%. Have you checked the average against the standard
> > deviation to get a better idea of whether the difference is really
> > outside of the noise?
> > --
> > Michal Hocko
> > SUSE Labs
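
For what it's worth, plugging the five flt/cpu/s samples from the tables
above into a quick standalone check (a userspace C snippet written for
this discussion, not part of the patch) puts the sample standard
deviation at roughly 1.1-1.2% of the mean for both series, well above
the claimed 0.2% improvement:

/*
 * Mean and sample standard deviation of the flt/cpu/s samples
 * quoted above.  Build with: gcc stats.c -lm
 */
#include <math.h>
#include <stdio.h>

static void stats(const char *label, const double *v, int n)
{
	double sum = 0.0, var = 0.0, mean;
	int i;

	for (i = 0; i < n; i++)
		sum += v[i];
	mean = sum / n;
	for (i = 0; i < n; i++)
		var += (v[i] - mean) * (v[i] - mean);
	var /= n - 1;	/* sample variance, Bessel-corrected */

	printf("%s: mean=%.1f stddev=%.1f (%.2f%% of mean)\n",
	       label, mean, sqrt(var), 100.0 * sqrt(var) / mean);
}

int main(void)
{
	const double unpatched[] = { 485490.062, 483086.072, 496687.098,
				     489711.434, 484159.415 };
	const double patched[]   = { 490942.682, 484987.521, 498824.98,
				     484127.673, 487032.0 };

	stats("unpatched", unpatched, 5);
	stats("patched  ", patched, 5);
	return 0;
}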
> 
> Thanks for your reply.
> The reasons for adding the configuration are as follows:

All those reasons should be a part of the changelog.

> 1. Referring to "[PATCH] psi: make disabling/enabling easier for
> vendor kernels", the same modification is also applicable to
> vmpressure.
> 
> 2. With the introduction of PSI into the kernel, there are two
> memory pressure monitoring mechanisms; it is not necessary to use
> both, so it makes sense to make vmpressure configurable.

I am not sure these are sufficient justifications, but that is
something to discuss. Hence they should be part of the changelog.
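
For reference, the PSI patch mentioned in point 1 gates the whole
feature behind a boot parameter plus a static key. A vmpressure
equivalent would presumably look something like the sketch below; the
vmpressure_* names are hypothetical, modeled on kernel/sched/psi.c
rather than taken from the actual patch:

#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/kernel.h>

/* Default comes from the proposed Kconfig symbol. */
static bool vmpressure_enable = IS_ENABLED(CONFIG_MEMCG_VMPRESSURE);

/* "vmpressure=0" / "vmpressure=1" on the kernel command line. */
static int __init setup_vmpressure(char *str)
{
	return kstrtobool(str, &vmpressure_enable) == 0;
}
__setup("vmpressure=", setup_vmpressure);

/*
 * Checked at the top of vmpressure(); a static key keeps the
 * disabled fast path to a single patched-out jump.
 */
DEFINE_STATIC_KEY_FALSE(vmpressure_disabled);

static int __init vmpressure_knob_init(void)
{
	if (!vmpressure_enable)
		static_branch_enable(&vmpressure_disabled);
	return 0;
}
core_initcall(vmpressure_knob_init);

With that in place, the hot path would test
static_branch_unlikely(&vmpressure_disabled) and return immediately,
exactly as PSI does with psi_disabled.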

> 3. In the case where the user does not need vmpressure, the
> vmpressure calculation is additional overhead.

You should quantify that and argue why that overhead cannot be further
reduced without config/boot time knobs.

> In some memory-tight scenarios, vmpressure will be executed
> frequently. We use "likely" and "inline" to improve the performance
> of the kernel, so why not also reduce some unnecessary calculations?

I am all for improving the code. Is it possible to do that by other
means? E.g. by reducing the potential overhead when there are no
events registered?
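
As a rough illustration of that direction (a hypothetical sketch, not
the actual patch: vmpressure_event_count is an invented counter that
vmpressure_register_event()/vmpressure_unregister_event() would have
to maintain, and this glosses over the non-tree socket_pressure side
effect of vmpressure()):

#include <linux/atomic.h>
#include <linux/vmpressure.h>

/* Hypothetical: global count of registered vmpressure events. */
static atomic_t vmpressure_event_count = ATOMIC_INIT(0);

void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
		unsigned long scanned, unsigned long reclaimed)
{
	/*
	 * No listeners anywhere: skip the scanned/reclaimed window
	 * accounting entirely, with no Kconfig or boot-time knob
	 * needed.
	 */
	if (likely(!atomic_read(&vmpressure_event_count)))
		return;

	/* ... existing scanned/reclaimed accounting ... */
}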
-- 
Michal Hocko
SUSE Labs




