Re: [PATCH bpf-next 2/4] bpf: Make bpf inode storage available to tracing program

On Tue, Nov 19, 2024 at 10:53 PM Song Liu <songliubraving@xxxxxxxx> wrote:
>
> Hi Jeff and Amir,
>
> Thanks for your inputs!
>
> > On Nov 19, 2024, at 7:30 AM, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> >
> > On Tue, Nov 19, 2024 at 4:25 PM Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> >>
> >> On Tue, Nov 19, 2024 at 3:21 PM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> >>>
>
> [...]
>
> >>>>> Longer term, I think it may be beneficial to come up with a way to attach
> >>>>> private info to the inode in a way that doesn't cost us one pointer per
> >>>>> functionality that may possibly attach info to the inode. We already have
> >>>>> i_crypt_info, i_verity_info, i_flctx, i_security, etc. It's always a tough
> >>>>> call whether the space overhead for everybody is worth the runtime &
> >>>>> complexity overhead for users of the functionality...
> >>>>
> >>>> It does seem to be the right long term solution, and I am willing to
> >>>> work on it. However, I would really appreciate some positive feedback
> >>>> on the idea, so that I have better confidence that my weeks of work
> >>>> have a good chance of being worth it.
> >>>>
> >>>> Thanks,
> >>>> Song
> >>>>
> >>>> [1] https://github.com/systemd/systemd/blob/main/src/core/bpf/restrict_fs/restrict-fs.bpf.c
> >>>
> >>> fsnotify is somewhat similar to file locking in that few inodes on the
> >>> machine actually utilize these fields.
> >>>
> >>> For file locking, we allocate and populate the inode->i_flctx field on
> >>> an as-needed basis. The kernel then hangs on to that struct until the
> >>> inode is freed.
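
For reference, the i_flctx scheme Jeff describes is a lazy cmpxchg()-based
allocation. A simplified sketch of the pattern used in fs/locks.c (not
verbatim; the real helper takes a type argument and does more initialization):

        struct file_lock_context *get_lock_context(struct inode *inode)
        {
                struct file_lock_context *ctx = smp_load_acquire(&inode->i_flctx);

                if (ctx)
                        return ctx;

                ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
                if (!ctx)
                        return NULL;

                /* if someone else raced us and won, free ours and use theirs */
                if (cmpxchg(&inode->i_flctx, NULL, ctx)) {
                        kfree(ctx);
                        ctx = smp_load_acquire(&inode->i_flctx);
                }
                return ctx;
        }
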
>
> If we have some universal on-demand per-inode memory allocator,
> I guess we can move i_flctx to it?
>
> >>> We could do something similar here. We have this now:
> >>>
> >>> #ifdef CONFIG_FSNOTIFY
> >>>        __u32                   i_fsnotify_mask; /* all events this inode cares about */
> >>>        /* 32-bit hole reserved for expanding i_fsnotify_mask */
> >>>        struct fsnotify_mark_connector __rcu    *i_fsnotify_marks;
> >>> #endif
>
> And maybe some fsnotify fields too?
>
> With a couple of users, I think that justifies having some universal
> on-demand allocator.
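
To make "universal" concrete, a very rough sketch of such a container (all
names here are hypothetical, purely for illustration); the allocation side
could reuse the same cmpxchg() pattern as the i_flctx sketch above:

        /* hypothetical: one lazily allocated struct replacing several
         * rarely used pointers in struct inode */
        struct inode_ondemand {
                struct file_lock_context                *i_flctx;
                struct fsnotify_mark_connector __rcu    *i_fsnotify_marks;
                struct bpf_local_storage __rcu          *i_bpf_storage;
        };
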
>
> >>> What if you were to turn these fields into a pointer to a new struct:
> >>>
> >>>        struct fsnotify_inode_context {
> >>>                struct fsnotify_mark_connector __rcu    *i_fsnotify_marks;
> >>>                struct bpf_local_storage __rcu          *i_bpf_storage;
> >>>                __u32                                   i_fsnotify_mask; /* all events this inode cares about */
> >>>        };
> >>>
> >>
> >> The extra indirection is going to hurt for i_fsnotify_mask, as it is
> >> accessed frequently in the fsnotify hooks, so I wouldn't move it into
> >> a container, but it could be moved to the hole after i_state.
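
To illustrate the indirection concern (the field name i_fsnotify_ctx below is
made up): the mask test in the event fast path goes from a single load to a
dependent load plus a NULL check:

        /* today: one load in the fast path */
        if (!(inode->i_fsnotify_mask & mask))
                return 0;

        /* with a container: extra dependent load + NULL check */
        ctx = READ_ONCE(inode->i_fsnotify_ctx);
        if (!ctx || !(ctx->i_fsnotify_mask & mask))
                return 0;
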
>
> >>> Then whenever you have to populate any of these fields, you just
> >>> allocate one of these structs and set the inode up to point to it.
> >>> They're tiny too, so don't bother freeing it until the inode is
> >>> deallocated.
> >>>
> >>> It'd mean rejiggering a fair bit of fsnotify code, but it would give
> >>> the fsnotify code an easier way to expand per-inode info in the future.
> >>> It would also slightly shrink struct inode too.
>
> I am hoping to make i_bpf_storage available to tracing programs.
> Therefore, I would rather not limit it to the fsnotify context. We can
> still use the universal on-demand allocator.
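
For context, a hedged sketch of the kind of tracing-side consumer the series
is aiming at. The map type and helper already exist for LSM programs; calling
them from fentry is what the patches would enable. The hook point and value
layout below are just examples:

        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        char LICENSE[] SEC("license") = "GPL";

        struct {
                __uint(type, BPF_MAP_TYPE_INODE_STORAGE);
                __uint(map_flags, BPF_F_NO_PREALLOC);
                __type(key, int);
                __type(value, __u64);
        } read_count SEC(".maps");

        SEC("fentry/vfs_read")
        int BPF_PROG(count_reads, struct file *file)
        {
                __u64 *cnt;

                /* only allowed from a tracing program with this series */
                cnt = bpf_inode_storage_get(&read_count, file->f_inode, 0,
                                            BPF_LOCAL_STORAGE_GET_F_CREATE);
                if (cnt)
                        __sync_fetch_and_add(cnt, 1);
                return 0;
        }
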
>
> >>
> >> This was already done for s_fsnotify_marks, so you can follow the recipe
> >> of 07a3b8d0bf72 ("fsnotify: lazy attach fsnotify_sb_info state to sb")
> >> and create an fsnotify_inode_info container.
> >>
> >
> > On second thought, the fsnotify_sb_info container is allocated and attached
> > in the context of userspace adding a mark.
> >
> > If you will need to allocate and attach fsnotify_inode_info in the context
> > of the fast path fanotify hook in order to add the inode to the map, I don't
> > think that is going to fly?
>
> Do you mean we may not be able to allocate memory in the fast path
> hook? AFAICT, the fast path is still in the process context, so I
> think this is not a problem?

Right, that should be ok.

Thanks,
Amir.




