Re: + mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups.patch added to -mm tree

On Sat, Nov 27, 2021 at 04:04:26PM -0800, akpm@xxxxxxxxxxxxxxxxxxxx wrote:
> 
> The patch titled
>      Subject: mm: memcg/percpu: account extra objcg space to memory cgroups
> has been added to the -mm tree.  Its filename is
>      mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups.patch
> 
> This patch should soon appear at
>     https://ozlabs.org/~akpm/mmots/broken-out/mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups.patch
> and later at
>     https://ozlabs.org/~akpm/mmotm/broken-out/mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups.patch
> 
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
> 
> *** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
> 
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
> 
> ------------------------------------------------------
> From: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
> Subject: mm: memcg/percpu: account extra objcg space to memory cgroups
> 
> Similar to the slab memory allocator, for each accounted percpu object
> there is extra space used to store its obj_cgroup membership.  Charge it
> too.
> 
> Link: https://lkml.kernel.org/r/20211126040606.97836-1-zhengqi.arch@xxxxxxxxxxxxx
> Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
> Cc: Dennis Zhou <dennis@xxxxxxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
> 
>  mm/percpu-internal.h |   17 +++++++++++++++++
>  mm/percpu.c          |   10 +++++-----
>  2 files changed, 22 insertions(+), 5 deletions(-)
> 
> --- a/mm/percpu.c~mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups
> +++ a/mm/percpu.c
> @@ -1635,7 +1635,7 @@ static bool pcpu_memcg_pre_alloc_hook(si
>  	if (!objcg)
>  		return true;
>  
> -	if (obj_cgroup_charge(objcg, gfp, size * num_possible_cpus())) {
> +	if (obj_cgroup_charge(objcg, gfp, pcpu_obj_full_size(size))) {
>  		obj_cgroup_put(objcg);
>  		return false;
>  	}
> @@ -1656,10 +1656,10 @@ static void pcpu_memcg_post_alloc_hook(s
>  
>  		rcu_read_lock();
>  		mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B,
> -				size * num_possible_cpus());
> +				pcpu_obj_full_size(size));
>  		rcu_read_unlock();
>  	} else {
> -		obj_cgroup_uncharge(objcg, size * num_possible_cpus());
> +		obj_cgroup_uncharge(objcg, pcpu_obj_full_size(size));
>  		obj_cgroup_put(objcg);
>  	}
>  }
> @@ -1676,11 +1676,11 @@ static void pcpu_memcg_free_hook(struct
>  		return;
>  	chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = NULL;
>  
> -	obj_cgroup_uncharge(objcg, size * num_possible_cpus());
> +	obj_cgroup_uncharge(objcg, pcpu_obj_full_size(size));
>  
>  	rcu_read_lock();
>  	mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B,
> -			-(size * num_possible_cpus()));
> +			-pcpu_obj_full_size(size));
>  	rcu_read_unlock();
>  
>  	obj_cgroup_put(objcg);
> --- a/mm/percpu-internal.h~mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups
> +++ a/mm/percpu-internal.h
> @@ -113,6 +113,23 @@ static inline int pcpu_chunk_map_bits(st
>  	return pcpu_nr_pages_to_map_bits(chunk->nr_pages);
>  }
>  
> +#ifdef CONFIG_MEMCG_KMEM
> +/**
> + * pcpu_obj_full_size - helper to calculate size of each accounted object
> + * @size: size of area to allocate in bytes
> + *
> + * For each accounted object there is an extra space which is used to store
> + * obj_cgroup membership. Charge it too.
> + */
> +static inline size_t pcpu_obj_full_size(size_t size)
> +{
> +	size_t extra_size =
> +		size / PCPU_MIN_ALLOC_SIZE * sizeof(struct obj_cgroup *);
> +
> +	return size * num_possible_cpus() + extra_size;
> +}
> +#endif /* CONFIG_MEMCG_KMEM */
> +
>  #ifdef CONFIG_PERCPU_STATS
>  
>  #include <linux/spinlock.h>
> _
> 
> Patches currently in -mm which might be from zhengqi.arch@xxxxxxxxxxxxx are
> 
> mm-remove-redundant-check-about-fault_flag_allow_retry-bit.patch
> mm-memcg-percpu-account-extra-objcg-space-to-memory-cgroups.patch
> 

Hi Andrew,

I understand I've been a bit slow, and it's easy for you to pull in
relatively small changes, so please add:

Acked-by: Dennis Zhou <dennis@xxxxxxxxxx>

But I don't think I've gotten resolution on what's going on with mm/cpu
hotplug in [1]. Can we continue that conversation over there?

[1] https://lore.kernel.org/mm-commits/20211108205031.UxDPHBZWa%25akpm@xxxxxxxxxxxxxxxxxxxx/

Thanks,
Dennis
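
For reference, below is a minimal standalone sketch of the charging math
introduced by the pcpu_obj_full_size() helper in the patch above. The
constants (a 4-byte PCPU_MIN_ALLOC_SIZE, 8 possible CPUs, 8-byte obj_cgroup
pointers) are assumptions chosen for illustration, not values taken from
this thread.

#include <stdio.h>
#include <stddef.h>

#define EXAMPLE_PCPU_MIN_ALLOC_SIZE 4	/* assumed minimum percpu allocation unit, in bytes */
#define EXAMPLE_NR_POSSIBLE_CPUS    8	/* assumed num_possible_cpus() */
#define EXAMPLE_OBJCG_PTR_SIZE      8	/* assumed sizeof(struct obj_cgroup *) on 64-bit */

/* Mirrors the helper the patch adds to mm/percpu-internal.h. */
static size_t example_pcpu_obj_full_size(size_t size)
{
	/* One obj_cgroup pointer is kept per PCPU_MIN_ALLOC_SIZE unit of the chunk. */
	size_t extra_size =
		size / EXAMPLE_PCPU_MIN_ALLOC_SIZE * EXAMPLE_OBJCG_PTR_SIZE;

	/* The object itself occupies @size bytes on every possible CPU. */
	return size * EXAMPLE_NR_POSSIBLE_CPUS + extra_size;
}

int main(void)
{
	size_t size = 64;	/* a 64-byte accounted percpu allocation */

	/*
	 * 64 bytes * 8 CPUs = 512 bytes of object data, plus
	 * 64 / 4 * 8 = 128 bytes of obj_cgroup bookkeeping: 640 bytes charged.
	 */
	printf("charged for a %zu-byte percpu object: %zu bytes\n",
	       size, example_pcpu_obj_full_size(size));
	return 0;
}

The point of centralizing the calculation in one helper is visible in the
diff itself: the charge, uncharge, and MEMCG_PERCPU_B vmstat paths all use
the same expression, so the obj_cgroup bookkeeping space cannot fall out of
sync between allocation and free.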