Re: [PATCH v2 bpf-next 1/7] bpf: Optimize program stats

On 2/9/21 12:13 AM, Alexei Starovoitov wrote:
On 2/8/21 1:28 PM, Daniel Borkmann wrote:
On 2/6/21 6:03 PM, Alexei Starovoitov wrote:
[...]
@@ -539,6 +540,12 @@ struct bpf_binary_header {
      u8 image[] __aligned(BPF_IMAGE_ALIGNMENT);
  };
+struct bpf_prog_stats {
+    u64 cnt;
+    u64 nsecs;
+    struct u64_stats_sync syncp;
+} __aligned(2 * sizeof(u64));
+
  struct bpf_prog {
      u16            pages;        /* Number of allocated pages */
      u16            jited:1,    /* Is our filter JIT'ed? */
@@ -559,6 +566,7 @@ struct bpf_prog {
      u8            tag[BPF_TAG_SIZE];
      struct bpf_prog_aux    *aux;        /* Auxiliary fields */
      struct sock_fprog_kern    *orig_prog;    /* Original BPF program */
+    struct bpf_prog_stats __percpu *stats;
      unsigned int        (*bpf_func)(const void *ctx,
                          const struct bpf_insn *insn);

nit: could we move aux & orig_prog while at it behind bpf_func just to avoid it slipping
into next cacheline by accident when someone extends this again?

I don't understand what moving aux+orig_prog after bpf_func will do.
Currently it's this:
struct bpf_prog_aux *      aux;                  /*    32     8 */
struct sock_fprog_kern *   orig_prog;            /*    40     8 */
unsigned int               (*bpf_func)(const void  *, const struct bpf_insn  *); /*    48     8 */

With the stats and active pointers the bpf_func goes into the 2nd cacheline.
In the past the motivation for keeping bpf_func right next to insns was
the interpreter. Now everyone has the JIT on, and the interpreter is often
compiled out of .text too, so having insns and bpf_func in the same
cacheline is no longer important.

Yeah that's not what I meant, the interpreter case is less important.

Whereas having bpf_func with stats and active could be important
if stats/active are also used in other places than fexit/fentry.
For this patch set bpf_func location is irrelevant, since the
prog addr is hardcoded inside bpf trampoline generated asm.
For the best speed only stats and active should be close to each other.
And they both will be in the 1st cacheline.

I meant to say that it's zero effort to move aux/orig_prog behind the
bpf_func, so stats/active/bpf_func can still be on same cacheline. Yes,
it's potentially less important with dispatcher being up to 64, but
still more relevant to fast path than aux/orig_prog. Also for non-x86
case.

@@ -249,7 +249,7 @@ void __bpf_prog_free(struct bpf_prog *fp)
      if (fp->aux) {
          mutex_destroy(&fp->aux->used_maps_mutex);
          mutex_destroy(&fp->aux->dst_mutex);
-        free_percpu(fp->aux->stats);
+        free_percpu(fp->stats);

This doesn't look correct: stats is /not/ tied to fp->aux anymore, but this if
block only runs when fp->aux is non-NULL, and it is only taking care of freeing
aux's resources. The free_percpu() needs to be moved outside the block so we
don't leak fp->stats.

Great catch. thanks!



