Re: [PATCH bpf-next v2 0/2] bpf: Fix to preserve reg parent/live fields when copying range info

On Tue, Jan 31, 2023 at 12:29 AM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Mon, 2023-01-30 at 18:42 -0800, Alexei Starovoitov wrote:
> [...]
> > > >
> > > > Hi Alexei, Andrii,
> > > >
> > > > Please note that the patch
> > > > "bpf: Fix to preserve reg parent/live fields when copying range info"
> > > > that started this conversation was applied to `bpf` tree, not `bpf-next`,
> > > > so I'll wait until it makes its way to `bpf-next` before submitting formal
> > > > patches, as it changes the performance numbers collected by veristat.
> > > > I did all my experiments with this patch applied on top of `bpf-next`.
> > > >
> > > > I adapted the patch suggested by Alexei and put it on my github for
> > > > now [1]. The performance gains are indeed significant:
> > > >
> > > > $ ./veristat -e file,states -C -f 'states_pct<-30' master.log uninit-reads.log
> > > > File                        States (A)  States (B)  States    (DIFF)
> > > > --------------------------  ----------  ----------  ----------------
> > > > bpf_host.o                         349         244    -105 (-30.09%)
> > > > bpf_host.o                        1320         895    -425 (-32.20%)
> > > > bpf_lxc.o                         1320         895    -425 (-32.20%)
> > > > bpf_sock.o                          70          48     -22 (-31.43%)
> > > > bpf_sock.o                          68          46     -22 (-32.35%)
> > > > bpf_xdp.o                         1554         803    -751 (-48.33%)
> > > > bpf_xdp.o                         6457        2473   -3984 (-61.70%)
> > > > bpf_xdp.o                         7249        3908   -3341 (-46.09%)
> > > > pyperf600_bpf_loop.bpf.o           287         145    -142 (-49.48%)
> > > > strobemeta.bpf.o                 15879        4790  -11089 (-69.83%)
> > > > strobemeta_nounroll2.bpf.o       20505        3931  -16574 (-80.83%)
> > > > xdp_synproxy_kern.bpf.o          22564        7009  -15555 (-68.94%)
> > > > xdp_synproxy_kern.bpf.o          24206        6941  -17265 (-71.33%)
> > > > --------------------------  ----------  ----------  ----------------
> > > >
> > > > However, this comes at the cost of allowing reads from uninitialized
> > > > stack locations. As far as I understand, access to an uninitialized local
> > > > variable is one of the most common errors when programming in C
> > > > (although a citation is needed).
> > >
> > > Yeah, a citation is really needed :) I don't see this often in
> > > practice, tbh. What I do see in practice is that people unnecessarily
> > > __builtin_memset(0) their structs and then initialize all fields
> > > one by one, instead of just using the nice C syntax:
> > >
> > > struct my_struct s = {
> > >    .field_a = 123,
> > >    .field_b = 234,
> > > };
> > >
> > >
> > > And all that just because there is some padding between field_a and
> > > field_b which the compiler won't zero-initialize.
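> > >
> > > A minimal sketch of the kind of layout where this bites (the field
> > > types are made up just to show where the hole can sit):
> > >
> > > struct my_struct {
> > >         __u32 field_a;  /* bytes 0-3 */
> > >                         /* bytes 4-7: padding the compiler inserts but
> > >                          * is not guaranteed to zero for a designated
> > >                          * initializer
> > >                          */
> > >         __u64 field_b;  /* bytes 8-15 */
> > > };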
>
> Andrii, do you have such an example somewhere? If the use case is to pass
> 's' to some helper function, it should already work if the function
> argument type is 'ARG_PTR_TO_UNINIT_MEM'. This is handled by the following
> code in verifier.c:check_stack_range_initialized():
>
>         ...
>         if (meta && meta->raw_mode) {
>                 ...
>                 meta->access_size = access_size;
>                 meta->regno = regno;
>                 return 0;
>         }
>
> But this only works for fixed stack offsets. I'm asking because it might
> point to some bug.
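>
> For reference, a minimal sketch of the case that already works today;
> the section name, buffer and use of bpf_get_current_task() are made up
> for illustration, and the usual <linux/bpf.h> and <bpf/bpf_helpers.h>
> includes plus a GPL license declaration are assumed:
>
>         SEC("kprobe/do_nanosleep")
>         int probe(void *ctx)
>         {
>                 char buf[8];    /* never written by the program itself */
>
>                 /* Accepted: the dst argument of bpf_probe_read_kernel()
>                  * is ARG_PTR_TO_UNINIT_MEM, so the code above only
>                  * records the access size instead of requiring an
>                  * initialized stack range.
>                  */
>                 bpf_probe_read_kernel(buf, sizeof(buf),
>                                       (void *)bpf_get_current_task());
>                 return 0;
>         }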

I don't have specifics at hand, but there were at least a few different
internal cases where this popped up. And just grepping our code base, we
have tons of

__builtin_memset(&event, 0, sizeof(event));

even though struct initialization would be more convenient.

I don't remember exactly how that struct was used, but one of the use
cases was passing it to bpf_perf_event_output(), which expects
initialized memory.
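
Roughly the pattern looks like this; the struct, map, field names and
tracepoint below are all made up just to illustrate the point, not taken
from any specific internal program:

#include <linux/types.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct event {
        __u32 pid;      /* bytes 0-3 */
                        /* bytes 4-7: compiler-inserted padding */
        __u64 ts;       /* bytes 8-15 */
};

struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
} events SEC(".maps");

SEC("tp/sched/sched_switch")
int handle(void *ctx)
{
        struct event e;

        /* Without this memset the padding bytes between pid and ts may
         * stay uninitialized, and passing &e to bpf_perf_event_output(),
         * whose data argument must be initialized memory, can get the
         * program rejected.  A designated initializer would be nicer,
         * but it is not guaranteed to zero the padding.
         */
        __builtin_memset(&e, 0, sizeof(e));
        e.pid = bpf_get_current_pid_tgid() >> 32;
        e.ts = bpf_ktime_get_ns();

        bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
                              &e, sizeof(e));
        return 0;
}

char _license[] SEC("license") = "GPL";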

>
> > >
> > > >
> > > > Also, more tests are failing after the register parentage chains patch is
> > > > applied than in Alexei's initial try: 10 verifier tests and 1 progs
> > > > test (test_global_func10.c; I have not modified it yet, it should wait
> > > > for my changes for unprivileged execution mode support in
> > > > test_loader.c). I don't really like how I had to fix those tests.
> > > >
> > > > I took a detailed look at the difference in verifier behavior between
> > > > master and the branch [1] for pyperf600_bpf_loop.bpf.o and identified
> > > > that the difference is caused by the fact that helper functions do not
> > > > mark the stack they access as REG_LIVE_WRITTEN. The details are in the
> > > > commit message [3], but the TLDR is the following example:
> > > >
> > > >         1: bpf_probe_read_user(&foo, ...);
> > > >         2: if (*foo) ...
> > > >
> > > > Here the stack slots for `foo` will not get a REG_LIVE_WRITTEN mark
> > > > when (1) is verified, thus the `*foo` read at (2) might lead to
> > > > excessive REG_LIVE_READ marks and thus more verification states.
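> > > >
> > > > Spelled out as a (made-up) fragment, just to make the liveness point
> > > > concrete; `user_ptr` is a hypothetical user-space address:
> > > >
> > > >         char foo[8];    /* stack buffer, never written by the program */
> > > >
> > > >         /* (1) the helper writes all 8 bytes of foo, but the stack
> > > >          *     slots are not marked REG_LIVE_WRITTEN
> > > >          */
> > > >         bpf_probe_read_user(&foo, sizeof(foo), user_ptr);
> > > >
> > > >         /* (2) this read is therefore propagated to parent states as
> > > >          *     REG_LIVE_READ, which prevents some state pruning
> > > >          */
> > > >         if (*foo)
> > > >                 ...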
> > >
> > > This is a good fix in its own right, of course, we should definitely do this!
> >
> > +1
> >
> > > >
> > > > I prepared a patch that changes helper call verification to apply
> > > > REG_LIVE_WRITTEN when the write size and alignment allow it; for now
> > > > it is also on my github [2]. This patch has a less dramatic performance
> > > > impact, but it is nonetheless significant:
> > > >
> > > > $ veristat -e file,states -C -f 'states_pct<-30' master.log helpers-written.log
> > > > File                        States (A)  States (B)  States    (DIFF)
> > > > --------------------------  ----------  ----------  ----------------
> > > > pyperf600_bpf_loop.bpf.o           287         156    -131 (-45.64%)
> > > > strobemeta.bpf.o                 15879        4772  -11107 (-69.95%)
> > > > strobemeta_nounroll1.bpf.o        2065        1337    -728 (-35.25%)
> > > > strobemeta_nounroll2.bpf.o       20505        3788  -16717 (-81.53%)
> > > > test_cls_redirect.bpf.o           8129        4799   -3330 (-40.96%)
> > > > --------------------------  ----------  ----------  ----------------
> > > >
> > > > I suggest that instead of dropping a useful safety check I further
> > > > investigate the difference in behavior between "uninit-reads.log" and
> > > > "helpers-written.log" and maybe figure out other improvements.
> > > > Unfortunately, the comparison process is extremely time consuming.
> > > >
> > > > wdyt?
> > >
> > > I think concerns about reading uninitialized stack slots are overblown
> > > in practice; given the good implications for programmer productivity,
> > > I'd still do it, if only in the name of improving user experience.
> >
> > +1
> > Let's do both (REG_LIVE_WRITTEN for helpers and allow uninit).
> >
> > Uninit access should be caught by the compiler.
> > The verifier repeats the check for historical reasons, from when we
> > tried to make it work for unpriv.
> > Allowing uninit won't increase the number of errors in bpf progs.
>
> Thank you for the feedback. I'll submit both patches once
> "bpf: Fix to preserve reg parent/live fields when copying range info"
> gets to bpf-next.
>
> Thanks,
> Eduard.


