Re: [PATCH bpf-next 1/7] bpf: Create links for BPF struct_ops maps.

On 2/15/23 14:58, Martin KaFai Lau wrote:
On 2/14/23 2:17 PM, Kui-Feng Lee wrote:
BPF struct_ops maps are employed directly to register TCP Congestion
Control algorithms. Unlike other BPF programs, which terminate when
their links are gone, a struct_ops program reduces its refcount only
when its FD is closed.

I think the refcount comment is probably not needed for this patch.

Got it!

The link of a BPF struct_ops map provides a uniform experience akin to
other BPF program types. In the following patches, a TCP Congestion
Control algorithm will be unregistered upon destroying the FD.


diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 17afd2b35ee5..1e6cdd0f355d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h

The existing BPF_LINK_TYPE_STRUCT_OPS enum value is reused. Please explain in the commit message why it can be reused, and also add a comment around the existing "bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS...)" in bpf_struct_ops_map_update_elem().

Sure!
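
Roughly something like this around the existing call in
bpf_struct_ops_map_update_elem(), I think (wording is only a first
attempt; my understanding is that those links are kernel-internal and
never primed, so they have no ID/FD visible to user space):

    /* The links created here are kernel-internal: they are never
     * primed/settled, so they have no ID or FD and cannot be seen
     * from user space.  Reusing BPF_LINK_TYPE_STRUCT_OPS for the
     * new user-visible struct_ops link therefore does not clash
     * with them.
     */
    bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, ...);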

@@ -1033,6 +1033,7 @@ enum bpf_attach_type {
      BPF_PERF_EVENT,
      BPF_TRACE_KPROBE_MULTI,
      BPF_LSM_CGROUP,
+    BPF_STRUCT_OPS_MAP,

nit. Only BPF_STRUCT_OPS. No need for "_MAP".

      __MAX_BPF_ATTACH_TYPE
  };
@@ -6354,6 +6355,9 @@ struct bpf_link_info {
          struct {
              __u32 ifindex;
          } xdp;
+        struct {
+            __u32 map_id;
+        } struct_ops_map;

nit. Same here, skip the "_map".

Got it!


This looks good, i.e. a dedicated struct instead of a union with prog_id, in case a user-space tool directly uses the prog_id without checking the link type.

      };
  } __attribute__((aligned(8)));
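
For reference, a user-space tool could read it roughly like this (just
a sketch against the header in this patch; the member may end up named
"struct_ops" per the nit above):

#include <errno.h>
#include <string.h>
#include <bpf/bpf.h>

/* Sketch: query the map_id behind a struct_ops link FD. */
static int get_struct_ops_map_id(int link_fd, __u32 *map_id)
{
    struct bpf_link_info info;
    __u32 len = sizeof(info);
    int err;

    memset(&info, 0, sizeof(info));
    err = bpf_obj_get_info_by_fd(link_fd, &info, &len);
    if (err)
        return err;
    if (info.type != BPF_LINK_TYPE_STRUCT_OPS)
        return -EINVAL;
    *map_id = info.struct_ops_map.map_id;
    return 0;
}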
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index ece9870cab68..621c8e24481a 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -698,3 +698,69 @@ void bpf_struct_ops_put(const void *kdata)
          call_rcu(&st_map->rcu, bpf_struct_ops_put_rcu);
      }
  }
+
+static void bpf_struct_ops_map_link_release(struct bpf_link *link)
+{
+    if (link->map) {
+        bpf_map_put(link->map);
+        link->map = NULL;
+    }
+}
+
+static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
+{
+    kfree(link);
+}
+
+static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link,
+                        struct seq_file *seq)
+{
+    seq_printf(seq, "map_id:\t%d\n",
+          link->map->id);
+}
+
+static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link,
+                           struct bpf_link_info *info)
+{
+    info->struct_ops_map.map_id = link->map->id;
+    return 0;
+}
+
+static const struct bpf_link_ops bpf_struct_ops_map_lops = {
+    .release = bpf_struct_ops_map_link_release,
+    .dealloc = bpf_struct_ops_map_link_dealloc,
+    .show_fdinfo = bpf_struct_ops_map_link_show_fdinfo,
+    .fill_link_info = bpf_struct_ops_map_link_fill_link_info,
+};

Can .detach be supported also?

Sure!
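
On top of this patch it would probably just drop the map reference
early; a minimal sketch (the later patches that do the real
unregistration will need proper synchronization around link->map):

static int bpf_struct_ops_map_link_detach(struct bpf_link *link)
{
    /* For now, detaching only releases the map reference held by
     * the link; the actual unregistration comes in later patches.
     */
    bpf_struct_ops_map_link_release(link);
    return 0;
}

and in bpf_struct_ops_map_lops:

    .detach = bpf_struct_ops_map_link_detach,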


+
+int link_create_struct_ops_map(union bpf_attr *attr, bpfptr_t uattr)

Does it need uattr?

nit. Rename to bpf_struct_ops_link_attach(), like how other link types' "attach" functions are named, or maybe even just bpf_struct_ops_attach().

Got it!


+{
+    struct bpf_link_primer link_primer;
+    struct bpf_map *map;
+    struct bpf_link *link = NULL;
+    int err;
+
+    map = bpf_map_get(attr->link_create.prog_fd);

This one looks weird: passing prog_fd to bpf_map_get(). I think in this case it makes sense to union map_fd with prog_fd in attr->link_create?
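
Something along these lines in the uapi header, perhaps (a sketch; only
the new map_fd member is the change, the other fields mirror the
existing header):

    struct { /* struct used by BPF_LINK_CREATE command */
        union {
            __u32   prog_fd;        /* eBPF program to attach */
            __u32   map_fd;         /* eBPF struct_ops map to attach */
        };
        union {
            __u32   target_fd;      /* object to attach to */
            __u32   target_ifindex; /* target ifindex */
        };
        __u32       attach_type;    /* attach type */
        __u32       flags;          /* extra flags */
        /* ... type-specific fields as in the existing header ... */
    } link_create;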

+    if (map->map_type != BPF_MAP_TYPE_STRUCT_OPS)

map is leaked.

Yes, I will fix it.
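
Probably something like this, routing the early exit through err_out
and also checking the bpf_map_get() return value (a sketch; map_fd per
the union suggestion above):

    map = bpf_map_get(attr->link_create.map_fd);
    if (IS_ERR(map))
        return PTR_ERR(map);

    if (map->map_type != BPF_MAP_TYPE_STRUCT_OPS) {
        err = -EINVAL;
        goto err_out;   /* err_out drops the map ref; link is still NULL */
    }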

+        return -EINVAL;
+
+    link = kzalloc(sizeof(*link), GFP_USER);
+    if (!link) {
+        err = -ENOMEM;
+        goto err_out;
+    }
+    bpf_link_init(link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_map_lops, NULL);
+    link->map = map;
+
+    err = bpf_link_prime(link, &link_primer);
+    if (err)
+        goto err_out;
+
+    return bpf_link_settle(&link_primer);
+
+err_out:
+    bpf_map_put(map);
+    kfree(link);
+    return err;
+}
+

[ ... ]

+extern int link_create_struct_ops_map(union bpf_attr *attr, bpfptr_t uattr);

Move it to bpf.h.

+
  #define BPF_LINK_CREATE_LAST_FIELD link_create.kprobe_multi.cookies
  static int link_create(union bpf_attr *attr, bpfptr_t uattr)
  {
@@ -4541,6 +4549,9 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
      if (CHECK_ATTR(BPF_LINK_CREATE))
          return -EINVAL;
+    if (attr->link_create.attach_type == BPF_STRUCT_OPS_MAP)
+        return link_create_struct_ops_map(attr, uattr);
+
      prog = bpf_prog_get(attr->link_create.prog_fd);
      if (IS_ERR(prog))
          return PTR_ERR(prog);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 17afd2b35ee5..1e6cdd0f355d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1033,6 +1033,7 @@ enum bpf_attach_type {
      BPF_PERF_EVENT,
      BPF_TRACE_KPROBE_MULTI,
      BPF_LSM_CGROUP,
+    BPF_STRUCT_OPS_MAP,
      __MAX_BPF_ATTACH_TYPE
  };
@@ -6354,6 +6355,9 @@ struct bpf_link_info {
          struct {
              __u32 ifindex;
          } xdp;
+        struct {
+            __u32 map_id;
+        } struct_ops_map;
      };
  } __attribute__((aligned(8)));
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 9aff98f42a3d..e44d49f17c86 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c

Not necessary in this set and could be a follow-up. Have you thought about how to generate a skel including the struct_ops link?

The user must now call bpf_map__set_map_flags() between XXX__open() and XXX__load(). To simplify the process, the skeleton should invoke bpf_map__set_map_flags() in XXX__open_and_load(). Therefore, a method to indicate which struct_ops maps need a link is necessary. For instance:

SEC(".struct_ops")
struct tcp_congestion_ops xxx_map = {
 ...
 .flags = BPF_F_LINK,
 ...
};

We can probably do it without any change to the code generator.
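
With the current patch set, the user-side code would look roughly like
this (names are placeholders for a generated skeleton; BPF_F_LINK comes
from a later patch in the series):

    struct xxx_bpf *skel;   /* hypothetical generated skeleton */
    int err;

    skel = xxx_bpf__open();
    if (!skel)
        return -errno;

    /* Request link-based registration for this struct_ops map. */
    bpf_map__set_map_flags(skel->maps.xxx_map,
                           bpf_map__map_flags(skel->maps.xxx_map) | BPF_F_LINK);

    err = xxx_bpf__load(skel);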


@@ -731,6 +731,8 @@ int bpf_link_create(int prog_fd, int target_fd,
          if (!OPTS_ZEROED(opts, tracing))
              return libbpf_err(-EINVAL);
          break;
+    case BPF_STRUCT_OPS_MAP:
+        break;
      default:
          if (!OPTS_ZEROED(opts, flags))
              return libbpf_err(-EINVAL);
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 35a698eb825d..75ed95b7e455 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -115,6 +115,7 @@ static const char * const attach_type_name[] = {
      [BPF_SK_REUSEPORT_SELECT_OR_MIGRATE]    = "sk_reuseport_select_or_migrate",
      [BPF_PERF_EVENT]        = "perf_event",
      [BPF_TRACE_KPROBE_MULTI]    = "trace_kprobe_multi",
+    [BPF_STRUCT_OPS_MAP]        = "struct_ops_map",
  };
  static const char * const link_type_name[] = {



