Re: [PATCH bpf-next 2/4] bpf: Make bpf inode storage available to tracing program

Hi Christian, 

> On Nov 13, 2024, at 2:19 AM, Christian Brauner <brauner@xxxxxxxxxx> wrote:

[...]

>> static inline void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
>>   bpf_func_t *bpf_func)
>> {
>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>> index 3559446279c1..479097e4dd5b 100644
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -79,6 +79,7 @@ struct fs_context;
>> struct fs_parameter_spec;
>> struct fileattr;
>> struct iomap_ops;
>> +struct bpf_local_storage;
>> 
>> extern void __init inode_init(void);
>> extern void __init inode_init_early(void);
>> @@ -648,6 +649,9 @@ struct inode {
>> #ifdef CONFIG_SECURITY
>> void *i_security;
>> #endif
>> +#ifdef CONFIG_BPF_SYSCALL
>> + struct bpf_local_storage __rcu *i_bpf_storage;
>> +#endif
> 
> Sorry, we're not growing struct inode for this. It just keeps getting
> bigger. Last cycle we freed up 8 bytes to shrink it and we're not going
> to waste them on special-purpose stuff. We already NAKed someone else's
> pet field here.

Per other discussions in this thread, I am implementing the following:

#ifdef CONFIG_SECURITY
        void                    *i_security;
#elif defined(CONFIG_BPF_SYSCALL)
        struct bpf_local_storage __rcu *i_bpf_storage;
#endif

However, it is a bit trickier than I thought. Specifically, we need
to handle the following scenarios:

1. CONFIG_SECURITY=y && CONFIG_BPF_LSM=n && CONFIG_BPF_SYSCALL=y
2. CONFIG_SECURITY=y && CONFIG_BPF_LSM=y && CONFIG_BPF_SYSCALL=y, but
   BPF LSM is not enabled at boot time.

AFAICT, we would need to modify how LSM blobs are managed for the
CONFIG_BPF_SYSCALL=y && CONFIG_BPF_LSM=n case. Even if that solution
gets accepted, it doesn't really save any memory. Instead of growing
struct inode by 8 bytes, it allocates 8 more bytes in the
inode->i_security blob. Total memory consumption stays the same, but
the memory is more fragmented.

Therefore, I think we should step back and reconsider adding
i_bpf_storage to struct inode. While this does grow struct inode by
8 bytes, it can end up reducing overall memory consumption for the
system. Here is why.

When inode local storage is not available, the alternative is a hash
map keyed by the inode pointer. AFAICT, all hash maps come with
non-trivial overhead: in memory consumption, in access latency, and
in the extra code needed to manage entry lifetimes. Inode local
storage has none of these issues and is usually much more efficient:
 - memory is only allocated for inodes with actual data,
 - access is O(1),
 - per-inode data is freed automatically when the inode is evicted.
Please refer to [1], where Amir listed all the work needed to
properly manage a hash map, and I explained why none of it is needed
with inode local storage.
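To make the contrast concrete, here is a rough sketch of the two approaches on the BPF side (not from this patch set; the map names and value struct are made up):

```c
/* Workaround: hash map keyed by the inode pointer. Capacity must be
 * guessed up front, and stale entries must be deleted by hand when
 * the inode goes away. */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);	/* pre-sized guess */
	__type(key, __u64);		/* inode pointer as key */
	__type(value, struct my_data);
} inode_hash SEC(".maps");

/* Inode local storage: allocated on first access, freed automatically
 * when the inode is evicted. */
struct {
	__uint(type, BPF_MAP_TYPE_INODE_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct my_data);
} inode_storage SEC(".maps");

/* In the program, e.g.:
 *   struct my_data *d;
 *   d = bpf_inode_storage_get(&inode_storage, inode, NULL,
 *			       BPF_LOCAL_STORAGE_GET_F_CREATE);
 */
```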

Besides reducing memory consumption, i_bpf_storage also shortens the
pointer chain used to reach inode local storage. Before this set,
inode local storage is reached via
inode->i_security + offset(struct bpf_storage_blob) -> storage. After
this set, inode local storage is simply inode->i_bpf_storage.
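In code, the difference looks roughly like this (a sketch based on the bpf_inode() helper in bpf_lsm.h, from memory; exact names may differ):

```c
/* Before: two dependent loads through the LSM blob. */
storage = ((struct bpf_storage_blob *)
	   (inode->i_security + bpf_lsm_blob_sizes.lbs_inode))->storage;

/* After: a single load from the inode itself. */
storage = inode->i_bpf_storage;
```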

At the moment, we already use BPF local storage via
task_struct->bpf_storage, sock->sk_bpf_storage, and
cgroup->bpf_cgrp_storage. All of these have been successful and help
users use memory more efficiently. I expect the same benefits from
inode->i_bpf_storage.

I hope this makes sense, and that you will consider adding
i_bpf_storage. Please let me know if anything above is unclear.

Thanks,
Song

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjXjjkKMa1xcPyxE5vxh1U5oGZJWtofRCwp-3ikCA6cgg@xxxxxxxxxxxxxx/





