Re: [PATCH v2 bpf-next 1/4] bpf: Search for kptrs in prog BTF structs

Hi,

On 8/5/2024 12:31 PM, Amery Hung wrote:
> On Sun, Aug 4, 2024 at 7:41 PM Hou Tao <houtao@xxxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> On 8/3/2024 8:11 AM, Amery Hung wrote:
>>> From: Dave Marchevsky <davemarchevsky@xxxxxx>
>>>
>>> Currently btf_parse_fields is used in two places to create struct
>>> btf_record's for structs: when looking at mapval type, and when looking
>>> at any struct in program BTF. The former looks for kptr fields while the
>>> latter does not. This patch modifies the btf_parse_fields call made when
>>> looking at prog BTF struct types to search for kptrs as well.
>>>
>> SNIP
>>> On a side note, when building program BTF, the refcount of program BTF
>>> is now initialized before btf_parse_struct_metas() to prevent a
>>> refcount_inc() on zero warning. This happens when BPF_KPTR is present
>>> in program BTF: btf_parse_struct_metas() -> btf_parse_fields()
>>> -> btf_parse_kptr() -> btf_get(). This should be okay as the program BTF
>>> is not available yet outside btf_parse().
>> If btf_parse_kptr() pins the passed btf, there will be a memory leak
>> for the btf after closing the btf fd: the btf_put() for the kptr
>> record in btf->struct_meta_tab only happens in
>> btf_free_struct_meta_tab(), which is called from btf_free(), but
>> btf_free() only runs once the final refcount of the btf is released,
>> so the btf will never be freed. The reason map values don't have
>> this problem is that the btf_put() for their kptr records doesn't
>> depend on the release of the map value btf; it is done by
>> bpf_map_free_record().
>>
> Thanks for pointing this out. It makes sense to me.
>
>> Maybe we should move the common btf used by kptr and graph_root into
>> btf_record and let the callers of btf_parse_fields() and
>> btf_record_free() to decide the life cycle of btf in btf_record.
> Could you explain if and why moving the btf of btf_field_kptr and
> btf_field_graph_root into btf_record is necessary? I think letting
> the callers of btf_parse_fields() and btf_record_free() decide
> whether or not to change the refcount should be enough. Besides, I
> would personally prefer to keep an individual btf in btf_field_kptr
> and btf_field_graph_root, so that special fields can later reference
> different btf-s.

Sorry, I didn't express the rough idea clearly enough. I didn't mean to
move the btf of btf_field_kptr and btf_field_graph_root into
btf_record, because there are other btf-s that differ from the btf from
which the struct_meta_tab is created. What I was trying to suggest is
to save one btf in btf_record, in the hope that it simplifies pinning
and unpinning the btf-s referenced by the btf_record:

1) save the btf that owns the btf_record in the btf_record itself.
2) during btf_parse_kptr() or similar, if the btf being used is the
same as the btf saved in btf_record, there is no need to pin it.
3) when freeing the btf_record, if the btf saved in a btf_field is the
same as the btf in btf_record, there is no need to put it.

Steps 2) and 3) could, however, also be done in other ways (e.g., by
passing the btf to btf_record_free() or similar).
>
> Thanks,
> Amery
>
>>> Acked-by: Martin KaFai Lau <martin.lau@xxxxxxxxxx>
>>> Signed-off-by: Dave Marchevsky <davemarchevsky@xxxxxx>
>>> Signed-off-by: Amery Hung <amery.hung@xxxxxxxxxxxxx>
>>> ---
>>>  kernel/bpf/btf.c | 6 ++++--
>>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>>> index 95426d5b634e..7b8275e3e500 100644
>>> --- a/kernel/bpf/btf.c
>>> +++ b/kernel/bpf/btf.c
>>> @@ -5585,7 +5585,8 @@ btf_parse_struct_metas(struct bpf_verifier_log *log, struct btf *btf)
>>>               type = &tab->types[tab->cnt];
>>>               type->btf_id = i;
>>>               record = btf_parse_fields(btf, t, BPF_SPIN_LOCK | BPF_LIST_HEAD | BPF_LIST_NODE |
>>> -                                               BPF_RB_ROOT | BPF_RB_NODE | BPF_REFCOUNT, t->size);
>>> +                                               BPF_RB_ROOT | BPF_RB_NODE | BPF_REFCOUNT |
>>> +                                               BPF_KPTR, t->size);
>>>               /* The record cannot be unset, treat it as an error if so */
>>>               if (IS_ERR_OR_NULL(record)) {
>>>                       ret = PTR_ERR_OR_ZERO(record) ?: -EFAULT;
>>> @@ -5737,6 +5738,8 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
>>>       if (err)
>>>               goto errout;
>>>
>>> +     refcount_set(&btf->refcnt, 1);
>>> +
>>>       struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
>>>       if (IS_ERR(struct_meta_tab)) {
>>>               err = PTR_ERR(struct_meta_tab);
>>> @@ -5759,7 +5762,6 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
>>>               goto errout_free;
>>>
>>>       btf_verifier_env_free(env);
>>> -     refcount_set(&btf->refcnt, 1);
>>>       return btf;
>>>
>>>  errout_meta:
