Re: [v9 PATCH 13/13] mm: vmscan: shrink deferred objects proportional to priority

On Wed, Mar 10, 2021 at 1:08 PM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> On Wed, Mar 10, 2021 at 10:54 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> >
> > On Wed, Mar 10, 2021 at 10:24 AM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Mar 10, 2021 at 9:46 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > > >
> > > > The number of deferred objects might wind up to an absurd value, and
> > > > that results in the slab caches being clamped hard once reclaim gets a
> > > > chance to run.  This is undesirable for sustaining the working set.
> > > >
> > > > So shrink deferred objects proportionally to the reclaim priority, and
> > > > cap nr_deferred at twice the number of cache items.
> > > >
> > > > The idea is borrowed from Dave Chinner's patch:
> > > > https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@xxxxxxxxxxxxx/
> > > >
> > > > Tested with a kernel build and a VFS-metadata-heavy workload in our
> > > > production environment; no regression has been spotted so far.
> > >
> > > Did you run both of these workloads in the same cgroup or separate cgroups?
> >
> > Both are covered.
> >
>
> Have you tried just this patch, i.e. without the first 12 patches?

No. The patch could be applied without the first 12 patches, but I
didn't test that combination specifically since I don't expect it to
behave any differently than it does with them. I did run the test case
under the root memcg, which is effectively equivalent to not having the
first 12 patches; the only difference is where nr_deferred comes from
(per-memcg vs. the shrinker's global counter), as sketched below.
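For readers following along, here is a rough sketch of the
proportional-scan arithmetic described in the changelog, rewritten as
plain userspace C rather than the patch itself. The function name
scan_budget() and the lmin()/lmax() helpers are illustrative stand-ins
(not kernel names); nr_deferred, freeable, delta, priority, and scanned
mirror the roles those values play in do_shrink_slab().

#include <stdio.h>

/* Stand-ins for the kernel's min()/max() macros. */
static long lmin(long a, long b) { return a < b ? a : b; }
static long lmax(long a, long b) { return a > b ? a : b; }

/*
 * nr_deferred: backlog carried over from earlier passes (e.g. work
 *              skipped under GFP_NOFS), freeable: count_objects(),
 *              delta: this pass's fair share, priority: reclaim
 *              priority (DEF_PRIORITY == 12 under light pressure),
 *              scanned: what the shrinker actually got through.
 * Assumes scanned <= total_scan for simplicity; the kernel scans in
 * batches and may slightly overshoot.
 */
static long scan_budget(long nr_deferred, long freeable, long delta,
			int priority, long scanned, long *next_deferred)
{
	/*
	 * Fold in only a priority-proportional slice of the backlog
	 * instead of the whole thing: 1/4096 of it at DEF_PRIORITY,
	 * growing as memory pressure rises (priority value drops).
	 */
	long total_scan = (nr_deferred >> priority) + delta;
	long remaining;

	/* Never try to scan more than twice the freeable objects. */
	total_scan = lmin(total_scan, 2 * freeable);

	/*
	 * Re-defer old work that wasn't done plus the unscanned part
	 * of this pass's budget, but cap the backlog at twice the
	 * cache size so it cannot wind up to an absurd number that
	 * would later clamp the cache in a single pass.
	 */
	remaining = total_scan - scanned;
	*next_deferred = lmax(nr_deferred - scanned, 0) + remaining;
	*next_deferred = lmin(*next_deferred, 2 * freeable);

	return total_scan;
}

int main(void)
{
	long next;
	/* A huge backlog meets a small cache under light pressure. */
	long scan = scan_budget(1L << 20, 1000, 10, 12, 0, &next);

	printf("total_scan=%ld next_deferred=%ld\n", scan, next);
	return 0;
}

Running this with a ~1M backlog against a 1000-object cache at
DEF_PRIORITY prints total_scan=266 next_deferred=2000: only a small
slice of the backlog is scanned now, and the re-deferred remainder is
capped at 2 * freeable instead of being carried over wholesale.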

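The "where nr_deferred comes from" difference boils down to a dispatch
like the toy model below. The series' real xchg_nr_deferred() uses
atomic_long_xchg() over per-node arrays and the per-memcg
shrinker_info, all of which are elided here; the struct layouts are
simplified stand-ins, not the kernel's actual definitions.

#include <stdio.h>

/* Simplified stand-ins for the kernel structures. */
struct mem_cgroup { long nr_deferred; };                  /* per-memcg */
struct shrinker   { long nr_deferred; int memcg_aware; }; /* global    */

/*
 * Read-and-clear the deferred count from the per-memcg store when the
 * shrinker is memcg aware and a memcg is under reclaim, otherwise from
 * the shrinker's own global counter. With the first 12 patches the
 * left branch exists; without them, everything takes the right one.
 */
static long xchg_nr_deferred(struct shrinker *s, struct mem_cgroup *memcg)
{
	long *slot = (memcg && s->memcg_aware) ? &memcg->nr_deferred
					       : &s->nr_deferred;
	long nr = *slot;

	*slot = 0;
	return nr;
}

int main(void)
{
	struct shrinker s = { .nr_deferred = 100, .memcg_aware = 1 };
	struct mem_cgroup mg = { .nr_deferred = 40 };

	/* memcg reclaim pulls from the per-memcg slot... */
	printf("memcg: %ld\n", xchg_nr_deferred(&s, &mg));
	/* ...while global (no-memcg) reclaim pulls from the shrinker. */
	printf("global: %ld\n", xchg_nr_deferred(&s, NULL));
	return 0;
}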

