Re: [PATCH 1/3] bpf: Use spinlock_t in bpf_lru_list

On 04/10/2019 07:08 PM, Yonghong Song wrote:
> On 4/10/19 7:30 AM, Sebastian Andrzej Siewior wrote:
>> There is no difference between spinlock_t and raw_spinlock_t for !RT
>> kernels. Since bpf_lru_list_pop_free_to_local() performs at least
>> three list walks, which look unbounded, it probably makes sense to
>> use spinlock_t.
>>
>> Make bpf_lru_list use a spinlock_t.
> 
> Could you add a cover letter for the patch set since you have
> more than one patch?
> 
>> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
>> ---
>>   kernel/bpf/bpf_lru_list.c | 38 +++++++++++++++++++-------------------
>>   kernel/bpf/bpf_lru_list.h |  4 ++--
>>   2 files changed, 21 insertions(+), 21 deletions(-)
>>
>> diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c
>> index e6ef4401a1380..40f47210c3817 100644
>> --- a/kernel/bpf/bpf_lru_list.c
>> +++ b/kernel/bpf/bpf_lru_list.c
>> @@ -313,9 +313,9 @@ static void bpf_lru_list_push_free(struct bpf_lru_list *l,
>>   	if (WARN_ON_ONCE(IS_LOCAL_LIST_TYPE(node->type)))
>>   		return;
>>   
>> -	raw_spin_lock_irqsave(&l->lock, flags);
>> +	spin_lock_irqsave(&l->lock, flags);
>>   	__bpf_lru_node_move(l, node, BPF_LRU_LIST_T_FREE);
>> -	raw_spin_unlock_irqrestore(&l->lock, flags);
>> +	spin_unlock_irqrestore(&l->lock, flags);
>>   }
> 
> This function (plus many others) is called by bpf program helpers which 
> cannot sleep. Is it possible that under RT spin_lock_irqsave() could 
> sleep, making the bpf subsystem unusable under RT?

Back then in ac00881f9221 ("bpf: convert hashtab lock to raw lock")
Yang Shi converted spinlock_t to raw_spinlock_t due to exactly the
issue mentioned above. I presume that hasn't changed, right?
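
To make the constraint concrete, here is a minimal sketch (illustrative
only, not part of the patch; the caller name is hypothetical) of why the
lock type matters once PREEMPT_RT turns spinlock_t into a sleeping lock:

#include <linux/spinlock.h>

/* Illustrative only. On !RT both lock types spin with preemption
 * disabled, so the conversion in this patch changes nothing there.
 * On PREEMPT_RT, spinlock_t is backed by an rt_mutex and may sleep,
 * which is why code reachable from non-sleepable BPF contexts has
 * used raw_spinlock_t since ac00881f9221.
 */
static DEFINE_RAW_SPINLOCK(safe_lock);    /* never sleeps, RT or not */
static DEFINE_SPINLOCK(may_sleep_on_rt);  /* sleeping lock on RT */

static void called_from_bpf_helper(void)  /* hypothetical caller */
{
	unsigned long flags;

	/* Fine in any context: a raw lock spins on RT as well. */
	raw_spin_lock_irqsave(&safe_lock, flags);
	/* ... critical section ... */
	raw_spin_unlock_irqrestore(&safe_lock, flags);

	/* On RT this may sleep; invalid if the caller cannot sleep,
	 * e.g. a BPF program running with preemption disabled.
	 */
	spin_lock_irqsave(&may_sleep_on_rt, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&may_sleep_on_rt, flags);
}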

>>   static void bpf_lru_list_pop_free_to_local(struct bpf_lru *lru,
>> @@ -325,7 +325,7 @@ static void bpf_lru_list_pop_free_to_local(struct bpf_lru *lru,
>>   	struct bpf_lru_node *node, *tmp_node;
>>   	unsigned int nfree = 0;
>>   
>> -	raw_spin_lock(&l->lock);
>> +	spin_lock(&l->lock);
>>   
>>   	__local_list_flush(l, loc_l);
>>   
>> @@ -344,7 +344,7 @@ static void bpf_lru_list_pop_free_to_local(struct bpf_lru *lru,
>>   				      local_free_list(loc_l),
>>   				      BPF_LRU_LOCAL_LIST_T_FREE);
>>   
>> -	raw_spin_unlock(&l->lock);
>> +	spin_unlock(&l->lock);
>>   }
>>   
>>   static void __local_list_add_pending(struct bpf_lru *lru,
> [...]
>>   static void bpf_lru_list_init(struct bpf_lru_list *l)
>> @@ -642,7 +642,7 @@ static void bpf_lru_list_init(struct bpf_lru_list *l)
>>   
>>   	l->next_inactive_rotation = &l->lists[BPF_LRU_LIST_T_INACTIVE];
>>   
>> -	raw_spin_lock_init(&l->lock);
>> +	spin_lock_init(&l->lock);
>>   }
>>   
>>   int bpf_lru_init(struct bpf_lru *lru, bool percpu, u32 hash_offset,
>> diff --git a/kernel/bpf/bpf_lru_list.h b/kernel/bpf/bpf_lru_list.h
>> index 7d4f89b7cb841..4e1e4608f1bb0 100644
>> --- a/kernel/bpf/bpf_lru_list.h
>> +++ b/kernel/bpf/bpf_lru_list.h
>> @@ -36,13 +36,13 @@ struct bpf_lru_list {
>>   	/* The next inacitve list rotation starts from here */
>>   	struct list_head *next_inactive_rotation;
>>   
>> -	raw_spinlock_t lock ____cacheline_aligned_in_smp;
>> +	spinlock_t lock ____cacheline_aligned_in_smp;
>>   };
>>   
>>   struct bpf_lru_locallist {
>>   	struct list_head lists[NR_BPF_LRU_LOCAL_LIST_T];
>>   	u16 next_steal;
>> -	raw_spinlock_t lock;
>> +	spinlock_t lock;
>>   };
>>   
>>   struct bpf_common_lru {
>>



