On Wed, Mar 10, 2021 at 1:08 PM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> On Wed, Mar 10, 2021 at 10:54 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> >
> > On Wed, Mar 10, 2021 at 10:24 AM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Mar 10, 2021 at 9:46 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > > >
> > > > The number of deferred objects might get wound up to an absurd number,
> > > > which results in clamping of slab objects. This is undesirable for
> > > > sustaining the working set.
> > > >
> > > > So shrink deferred objects proportionally to priority and cap
> > > > nr_deferred at twice the number of cache items.
> > > >
> > > > The idea is borrowed from Dave Chinner's patch:
> > > > https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@xxxxxxxxxxxxx/
> > > >
> > > > Tested with a kernel build and a vfs-metadata-heavy workload in our
> > > > production environment; no regression has been spotted so far.
> > >
> > > Did you run both of these workloads in the same cgroup or separate cgroups?
> >
> > Both are covered.
> >
>
> Have you tried just this patch i.e. without the first 12 patches?

No. This patch could be applied without the first 12 patches, but I
didn't test that combination specifically since I don't expect it to
behave any differently. I did run the test case under the root memcg,
which is effectively equivalent to running without the first 12
patches; the only difference is where nr_deferred is read from.
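
To make the proportional drain and the cap quoted above easier to see,
here is a rough userspace sketch (plain C, compiles standalone, not the
kernel patch itself). The names freeable, nr_deferred and priority
mirror do_shrink_slab(), but the exact arithmetic below is illustrative
only:

/*
 * Each pass scans only a priority-proportional slice of the deferred
 * backlog, and the carried-over nr_deferred is capped at twice the
 * number of freeable objects. Illustrative arithmetic, not the patch.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t shrink_pass(uint64_t freeable, uint64_t delta,
			    uint64_t *nr_deferred, int priority)
{
	/* Pull in a bigger share of the backlog as reclaim pressure
	 * rises (smaller priority value), not the whole backlog. */
	uint64_t slice = *nr_deferred >> priority;
	uint64_t total_scan = delta + slice;

	/* Carry the remainder forward, but never let the deferred
	 * count wind up beyond 2x the cache size. */
	uint64_t next_deferred = *nr_deferred - slice;

	if (next_deferred > 2 * freeable)
		next_deferred = 2 * freeable;
	*nr_deferred = next_deferred;

	return total_scan;
}

int main(void)
{
	uint64_t nr_deferred = 1000000;	/* wound-up backlog */
	uint64_t freeable = 10000;
	int priority;

	for (priority = 12; priority >= 1; priority--)
		printf("prio %2d: scan %llu, deferred %llu\n",
		       priority,
		       (unsigned long long)shrink_pass(freeable, 100,
						       &nr_deferred,
						       priority),
		       (unsigned long long)nr_deferred);
	return 0;
}

With the wound-up starting value, the very first pass clamps the carried
nr_deferred to 2 * freeable, and each later pass drains only an
nr_deferred >> priority slice, so the backlog is worked off gradually
instead of being dumped on the caches in one go.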