On Tue, Mar 23, 2021 at 4:21 AM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Wed, Mar 10, 2021 at 11:02:08PM +0100, Florent Revest wrote:
> >
> > +struct bpf_snprintf_buf {
> > +        char buf[MAX_SNPRINTF_MEMCPY][MAX_SNPRINTF_STR_LEN];
> > +};
> > +static DEFINE_PER_CPU(struct bpf_snprintf_buf, bpf_snprintf_buf);
> > +static DEFINE_PER_CPU(int, bpf_snprintf_buf_used);
> > +
> > +BPF_CALL_5(bpf_snprintf, char *, out, u32, out_size, char *, fmt, u64 *, args,
> > +           u32, args_len)
> > +{
> > +        int err, i, buf_used, copy_size, fmt_cnt = 0, memcpy_cnt = 0;
> > +        u64 params[MAX_SNPRINTF_VARARGS];
> > +        struct bpf_snprintf_buf *bufs;
> > +
> > +        buf_used = this_cpu_inc_return(bpf_snprintf_buf_used);
> > +        if (WARN_ON_ONCE(buf_used > 1)) {
>
> this can trigger only if the helper itself gets preempted and
> another bpf prog will run on the same cpu and will call into this helper
> again, right?
> If so, how about adding preempt_disable here to avoid this case?

Ah, neat, that sounds like a good idea indeed. This was really just
cargo-culted from bpf_seq_printf but as part of my grand unification
attempt for the various printf-like helpers, I can try to make it use
preempt_disable as well, yes.

> It won't prevent the case where kprobe is inside snprintf core,
> so the counter is still needed, but it wouldn't trigger by accident.

Good point, I will keep it around then.

> Also since bufs are not used always, how about grabbing the
> buffers only when %p or %s are seen in fmt?
> After snprintf() is done it would conditionally do:
> if (bufs_were_used) {
>         this_cpu_dec(bpf_snprintf_buf_used);
>         preempt_enable();
> }
> This way simple bpf_snprintf won't ever hit EBUSY.

Absolutely, it would be nice. :)

> > +                err = -EBUSY;
> > +                goto out;
> > +        }
> > +
> > +        bufs = this_cpu_ptr(&bpf_snprintf_buf);
> > +
> > +        /*
> > +         * The verifier has already done most of the heavy-work for us in
> > +         * check_bpf_snprintf_call. We know that fmt is well formatted and that
> > +         * args_len is valid. The only task left is to convert some of the
> > +         * arguments. For the %s and %pi* specifiers, we need to read buffers
> > +         * from a kernel address during the helper call.
> > +         */
> > +        for (i = 0; fmt[i] != '\0'; i++) {
> > +                if (fmt[i] != '%')
> > +                        continue;
> > +
> > +                if (fmt[i + 1] == '%') {
> > +                        i++;
> > +                        continue;
> > +                }
> > +
> > +                /* fmt[i] != 0 && fmt[last] == 0, so we can access fmt[i + 1] */
> > +                i++;
> > +
> > +                /* skip optional "[0 +-][num]" width formating field */
> > +                while (fmt[i] == '0' || fmt[i] == '+' || fmt[i] == '-' ||
> > +                       fmt[i] == ' ')
> > +                        i++;
> > +                if (fmt[i] >= '1' && fmt[i] <= '9') {
> > +                        i++;
> > +                        while (fmt[i] >= '0' && fmt[i] <= '9')
> > +                                i++;
> > +                }
> > +
> > +                if (fmt[i] == 's') {
> > +                        void *unsafe_ptr = (void *)(long)args[fmt_cnt];
> > +
> > +                        err = strncpy_from_kernel_nofault(bufs->buf[memcpy_cnt],
> > +                                                          unsafe_ptr,
> > +                                                          MAX_SNPRINTF_STR_LEN);
> > +                        if (err < 0)
> > +                                bufs->buf[memcpy_cnt][0] = '\0';
> > +                        params[fmt_cnt] = (u64)(long)bufs->buf[memcpy_cnt];
> how about:
> char buf[512]; instead?
> instead of memcpy_cnt++ remember how many bytes of the buf were used and
> copy next arg after that.
> The scratch space would be used more efficiently.
> The helper would potentially return ENOSPC if the first string printed via %s
> consumed most of the 512 space and the second string doesn't fit.
> But the verifier-time if (memcpy_cnt >= MAX_SNPRINTF_MEMCPY) can be removed.
> Ten small %s will work fine.

Cool! That is also a good idea :)
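Just to check I'm reading the combination of these suggestions correctly,
here is roughly what I have in mind for v2. Completely untested sketch, and
the try_get_snprintf_buf()/put_snprintf_buf() names, the
MAX_SNPRINTF_BUF_LEN constant and the buf/buf_used locals are only
placeholders I'm making up for the example:

#define MAX_SNPRINTF_BUF_LEN	512

struct bpf_snprintf_buf {
	char buf[MAX_SNPRINTF_BUF_LEN];
};
static DEFINE_PER_CPU(struct bpf_snprintf_buf, bpf_snprintf_buf);
static DEFINE_PER_CPU(int, bpf_snprintf_buf_used);

/* Only called once the first %s/%pi* argument is hit. */
static int try_get_snprintf_buf(char **buf)
{
	preempt_disable();
	/*
	 * The counter still catches a prog attached inside the
	 * snprintf/strncpy core re-entering the helper on this CPU.
	 */
	if (this_cpu_inc_return(bpf_snprintf_buf_used) > 1) {
		this_cpu_dec(bpf_snprintf_buf_used);
		preempt_enable();
		return -EBUSY;
	}
	*buf = this_cpu_ptr(&bpf_snprintf_buf)->buf;
	return 0;
}

static void put_snprintf_buf(void)
{
	this_cpu_dec(bpf_snprintf_buf_used);
	preempt_enable();
}

	/*
	 * In bpf_snprintf() itself, assuming "char *buf = NULL;" and
	 * "int buf_used = 0;" locals, the %s case of the specifier loop
	 * would become something like:
	 */
		if (fmt[i] == 's') {
			void *unsafe_ptr = (void *)(long)args[fmt_cnt];
			int len;

			/* grab the scratch buf lazily, on the first string arg */
			if (!buf) {
				err = try_get_snprintf_buf(&buf);
				if (err)
					goto out;
			}
			if (buf_used >= MAX_SNPRINTF_BUF_LEN) {
				err = -ENOSPC;
				goto out;
			}
			len = strncpy_from_kernel_nofault(buf + buf_used,
							  unsafe_ptr,
							  MAX_SNPRINTF_BUF_LEN - buf_used);
			if (len < 0) {
				buf[buf_used] = '\0';
				len = 1;
			}
			params[fmt_cnt] = (u64)(long)(buf + buf_used);
			/* copied strings stay packed back to back, NUL included */
			buf_used += len;
		}

		/* ... %pi*, %d, etc. handled as before ... */
out:
	/* conditional release, as you suggested */
	if (buf)
		put_snprintf_buf();

The nice property is that a call with no %s or %pi* argument never takes
the buffer or disables preemption, so it can never return -EBUSY, while a
large first string simply leaves less room for the next one.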
> We can allocate a page per-cpu when this helper is used by prog and free
> that page when all progs with bpf_snprintf are unloaded.
> But extra complexity is probably not worth it. I would start with 512 per-cpu.
> It's going to be enough for most users.

Yes, let's maybe keep that for later. I think there is already enough
complexity going into the printf-like helpers unification patch.

> Overall looks great. Cannot wait for v2 :)

Ahah wait until you see that patch! :D