Re: [RFC PATCH] mm: have kswapd only reclaiming use min protection on memcg

On Wed, Oct 27, 2021 at 7:52 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Wed 27-10-21 17:19:56, Zhaoyang Huang wrote:
> > On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > >
> > > On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> > > > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > > >
> > > > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > > > > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > > > > >
> > > > > > With kswapd-only reclaim there is no chance to try this group again,
> > > > > > whereas direct reclaim gets a second pass. Fix it by checking the gfp
> > > > > > flags.
> > > > >
> > > > > There is no problem description (same as in your last submissions;
> > > > > have you looked at the patch submission documentation, as recommended
> > > > > previously?).
> > > > >
> > > > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > > > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > > > > for the kswapd part).
> > > > ok, but for an allocation without __GFP_DIRECT_RECLAIM, how does reclaim
> > > > make progress against the memcg's min protection?
> > >
> > > I do not follow. There is no need to protect memcg if the allocation
> > > request doesn't have __GFP_DIRECT_RECLAIM because that would fail the
> > > charge if a hard limit is reached, see try_charge_memcg and
> > > gfpflags_allow_blocking check.
> > >
> > > Background reclaim, on the other hand, never breaches reclaim protection.
> > >
> > > What is the actual problem you want to solve?
> > Imagine an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM while all
> > processes are under cgroups. Kswapd is the only hope here, but it scans
> > inefficiently because get_scan_count applies the full protection. I would
> > like kswapd to work like direct reclaim does in its second round, where
> > the effective protection drops to memory.min.
>
> Do you have an example where this would be a practical problem? Atomic
> allocations should be rather rare.
Please find below the search results for '~__GFP_DIRECT_RECLAIM', which
show that several drivers and the net subsystem prefer to behave that
way. Furthermore, these allocations often come with a high order as
well. (Two simplified sketches of the code paths involved follow the
list.)

block/bio.c:464: gfp_mask &= ~__GFP_DIRECT_RECLAIM;
drivers/vhost/net.c:668: pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
drivers/net/ethernet/mellanox/mlx4/icm.c:184: mask &= ~__GFP_DIRECT_RECLAIM;
fs/erofs/zdata.c:243: gfp_t gfp = (mapping_gfp_mask(mc) & ~__GFP_DIRECT_RECLAIM) |
fs/fscache/page.c:138: gfp &= ~__GFP_DIRECT_RECLAIM;
fs/fscache/cookie.c:187: INIT_RADIX_TREE(&cookie->stores, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
fs/btrfs/disk-io.c:2928: INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
fs/btrfs/volumes.c:6868: INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
fs/btrfs/volumes.c:6869: INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
kernel/cgroup/cgroup.c:325: ret = idr_alloc(idr, ptr, start, end, gfp_mask & ~__GFP_DIRECT_RECLAIM);
mm/mempool.c:389: gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM|__GFP_IO);
mm/hugetlb.c:2165: gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
mm/mempolicy.c:2061: preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
mm/memcontrol.c:5452: ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_DIRECT_RECLAIM, count);
net/core/sock.c:2623: pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
net/core/skbuff.c:6084: page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) |
net/netlink/af_netlink.c:1302: (allocation & ~__GFP_DIRECT_RECLAIM) |
net/netlink/af_netlink.c:2259: (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) |
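
For context, here is a minimal sketch of why such call sites end up
depending on kswapd alone. This paraphrases gfpflags_allow_blocking(),
__alloc_pages_slowpath() in mm/page_alloc.c and try_charge_memcg() in
mm/memcontrol.c from the kernels current around this thread (~v5.15);
it is condensed, not the literal code:

	/* include/linux/sched/mm.h */
	static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
	{
		return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
	}

	/* mm/page_alloc.c, __alloc_pages_slowpath(), condensed */
	if (alloc_flags & ALLOC_KSWAPD)
		wake_all_kswapds(order, gfp_mask, ac);	/* kswapd is woken either way */
	...
	/* caller cleared __GFP_DIRECT_RECLAIM: it never reclaims itself */
	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
		goto nopage;

	/* mm/memcontrol.c, try_charge_memcg(), condensed */
	/* over the limit and not allowed to block: the charge just fails */
	if (!gfpflags_allow_blocking(gfp_mask))
		goto nomem;

So a caller that masks out __GFP_DIRECT_RECLAIM can only wait for kswapd
to make room for it.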
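
And here is the asymmetry the patch is aiming at: direct reclaim gets a
second round against cgroup protection, kswapd does not. Again a
condensed paraphrase of shrink_node_memcgs() and do_try_to_free_pages()
in mm/vmscan.c (~v5.15), not the literal code:

	/* mm/vmscan.c, shrink_node_memcgs(), condensed */
	mem_cgroup_calculate_protection(target_memcg, memcg);

	if (mem_cgroup_below_min(memcg)) {
		/* memory.min is never breached by any reclaimer */
		continue;
	} else if (mem_cgroup_below_low(memcg)) {
		/* memory.low is honoured in the first round only */
		if (!sc->memcg_low_reclaim) {
			sc->memcg_low_skipped = 1;
			continue;
		}
		memcg_memory_event(memcg, MEMCG_LOW);
	}

	/* mm/vmscan.c, do_try_to_free_pages(): the retry kswapd lacks */
	/* Untapped cgroup reserves?  Don't OOM, retry. */
	if (sc->memcg_low_skipped) {
		sc->priority = initial_priority;
		sc->memcg_low_reclaim = 1;	/* 2nd round: only memory.min holds */
		sc->memcg_low_skipped = 0;
		goto retry;
	}

balance_pgdat(), the kswapd path, has no equivalent retry, which is the
behaviour this patch wants to change.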

>
> --
> Michal Hocko
> SUSE Labs



