Re: [PATCH v4 bpf-next 01/15] bpf: Introduce any context BPF specific memory allocator.

On Thu, Aug 25, 2022 at 07:44:16PM -0700, Alexei Starovoitov wrote:
> +/* Mostly runs from irq_work except __init phase. */
> +static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> +{
> +	struct mem_cgroup *memcg = NULL, *old_memcg;
> +	unsigned long flags;
> +	void *obj;
> +	int i;
> +
> +	memcg = get_memcg(c);
> +	old_memcg = set_active_memcg(memcg);
> +	for (i = 0; i < cnt; i++) {
> +		obj = __alloc(c, node);
> +		if (!obj)
> +			break;
> +		if (IS_ENABLED(CONFIG_PREEMPT_RT))
> +			/* In RT irq_work runs in per-cpu kthread, so disable
> +			 * interrupts to avoid preemption and interrupts and
> +			 * reduce the chance of bpf prog executing on this cpu
> +			 * when active counter is busy.
> +			 */
> +			local_irq_save(flags);
> +		if (local_inc_return(&c->active) == 1) {
Is it because local_inc_return(&c->active) always returns 1 here, so there is
no need to free obj when it is '!= 1'? Otherwise it looks like obj would be
leaked on that path. (A rough sketch of the pattern follows the quoted
function below.)

> +			__llist_add(obj, &c->free_llist);
> +			c->free_cnt++;
> +		}
> +		local_dec(&c->active);
> +		if (IS_ENABLED(CONFIG_PREEMPT_RT))
> +			local_irq_restore(flags);
> +	}
> +	set_active_memcg(old_memcg);
> +	mem_cgroup_put(memcg);
> +}
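To make the question concrete, here is a minimal userspace sketch of the
reentrancy-counter pattern being asked about. It is an illustration only, not
the kernel code: local_t and llist are replaced with a plain int and a singly
linked list, and the names (struct cache, push_if_owner) are made up. The
point it shows is that the caller of the '!= 1' path has to do something with
obj (free it or retry), or the object is simply lost.

/* Illustrative userspace sketch of a reentrancy-counter guard.
 * Hypothetical names; not the bpf_mem_alloc implementation.
 */
#include <stdio.h>
#include <stdlib.h>

struct node { struct node *next; };

struct cache {
	int active;              /* stands in for local_t c->active  */
	struct node *free_list;  /* stands in for c->free_llist      */
	int free_cnt;
};

/* Push obj onto the cache's free list if we are the only context in the
 * critical section. Returns 1 on success, 0 if another context on this
 * "cpu" already holds the counter and the push was skipped.
 */
static int push_if_owner(struct cache *c, struct node *obj)
{
	int ok = 0;

	if (++c->active == 1) {   /* analogous to local_inc_return() == 1 */
		obj->next = c->free_list;
		c->free_list = obj;
		c->free_cnt++;
		ok = 1;
	}
	c->active--;              /* analogous to local_dec()             */
	return ok;
}

int main(void)
{
	struct cache c = {0};
	struct node *obj = malloc(sizeof(*obj));

	if (!obj)
		return 1;
	if (!push_if_owner(&c, obj)) {
		/* If the push is skipped, obj must be freed (or retried);
		 * dropping it silently would leak memory.
		 */
		free(obj);
	}
	printf("free_cnt=%d\n", c.free_cnt);
	return 0;
}

In alloc_bulk() the skipped-push case discards obj without freeing it, hence
the question whether that case can actually happen in this context.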


