On Mon, Dec 20, 2021 at 01:26:03AM +0530, Kumar Kartikeya Dwivedi wrote:
> >
> > The goal is clear now, but look at it differently:
> > struct nf_conn *ct = bpf_xdp_ct_lookup(...);
> > if (ct) {
> >      struct nf_conn *master = ct->master;
> >      struct net *net = ct->ct_net.net;
> >
> >      bpf_ct_release(ct);
> >      master->status; // prevent this ?
> >      net->ifindex; // but allow this ?
>
> I think both will be prevented with the current logic, no?
> net will be ct + offset, so if mark_btf_ld_reg writes PTR_TO_BTF_ID to dst_reg
> for net, it will copy ct's reg's ref_obj_id to parent_ref_obj_id of dst_reg (net).
> Then on release of ct, net's reg gets killed too since reg[ct]->ref_obj_id
> matches its parent_ref_obj_id.

Exactly, but it should be allowed.
There is nothing wrong with 'net' access after ct_release.
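(To illustrate why it is harmless: a PTR_TO_BTF_ID deref such as the
'net->ifindex' above is emitted as a BPF_PROBE_MEM load, so it behaves
roughly like the sketch below. This is only an illustration, not the
actual codegen:

        int ifindex = 0;
        bpf_probe_read_kernel(&ifindex, sizeof(ifindex), &net->ifindex);

If 'net' happens to point at freed memory the faulting load is caught
and the destination is zeroed, so the program may read zero or stale
data, but the kernel cannot crash.)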
> > }
> > The verifier cannot statically check this. That's why all such deref
> > are done via BPF_PROBE_MEM (which is the same as probe_read_kernel).
> > We must disallow use after free when it can cause a crash.
> > This case is not the one.
>
> That is a valid point, this is certainly in 'nice to have/prevents obvious
> misuse' territory, but if this can be done without introducing too much
> complexity, I'd like us to do it.
>
> A bit of a digression, but:
> I'm afraid this patch is going to be brought up again for a future effort
> related to XDP queueing that Toke is working on. We have a similar scenario
> there, when xdp_md (aliasing xdp_frame) is dequeued from the PIFO map, and
> PTR_TO_PACKET is obtained by reading xdp_md->data. The xdp_md is referenced, so
> we need to invalidate these pkt pointers as well, in addition to killing xdp_md
> copies. Also this parent_ref_obj_id state allows us to reject comparisons
> between pkt pointers pointing into different xdp_md's (when you dequeue more
> than one at once and form multiple pkt pointers pointing into different
> xdp_mds).

I cannot quite grasp the issue. Sounds orthogonal.
The pkt pointers are not ptr_to_btf_id like.
There is no PROBE_MEM there.

> > struct nf_conn *ct = bpf_xdp_ct_lookup(...);
> > struct nf_conn *master = ct->master;
> > bpf_ct_release(master);
> > definitely has to be prevented, since it will cause a crash.
> >
> > As a follow up to this set would be great to allow ptr_to_btf_id
> > pointers persist longer than program execution.
> > Users already asked to allow the following:
> > map_value = bpf_map_lookup_elem(...);
> > struct nf_conn *ct = bpf_xdp_ct_lookup(...);
> > map_value->saved_ct = ct;
> > and some time later in a different or the same program:
> > map_value = bpf_map_lookup_elem(...);
> > bpf_ct_release(map_value->saved_ct);
> >
> > Currently folks work around this deficiency by storing some
> > sort of id and doing extra lookups while performance is suffering.
> > wdyt?
>
> Very interesting idea! I'm guessing we'll need something akin to bpf_timer
> support, i.e. a dedicated type verified using BTF which can be embedded in
> map_value? I'll be happy to work on enabling this.

Thanks!
Would be awesome.

> One thought though (just confirming):
> If user does map_value->saved_ct = ct, we have to ignore reference leak check
> for ct's ref_id, but if they rewrite saved_ct, we would also have to unignore
> it, correct?

We cannot just ignore it :)
I was thinking to borrow std::unique_ptr-like semantics.

struct nf_conn *ct = bpf_xdp_ct_lookup(...); // here ref checking logic tracks it as normal
map_value->saved_ct = ct; // here it transfers the ref from Rx into map_value
ct->status; // cannot be accessed here.

It could look unnatural to a typical C programmer, so we might need an
explicit std::move-like helper, so the assignment will be:

bpf_move_ptr(&map_value->saved_ct, &ct); // same as map_value->saved_ct = ct; ct = NULL;
...
bpf_move_ptr(&ct, &map_value->saved_ct); // would take the ownership back from the map
                                         // and the ref checking logic tracks 'ct' again as normal

> I think we can make this tracking easier by limiting to one bpf_ptr_to_btf
> struct in map_value, then it can simply be part of ptr_to_map_value's reg_state.

Possible. Hopefully such limitation will not be needed.
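To put the whole flow together, it could look something like this
(bpf_move_ptr, the saved_ct slot and the map layout are hypothetical,
just to sketch the idea):

struct map_val {
        struct nf_conn *saved_ct; /* BTF-checked pointer slot, similar to bpf_timer */
};

/* prog A */
struct nf_conn *ct = bpf_xdp_ct_lookup(...);
if (!ct)
        return XDP_PASS;
struct map_val *v = bpf_map_lookup_elem(&m, &key);
if (v)
        bpf_move_ptr(&v->saved_ct, &ct); /* ref moves into the map, 'ct' becomes NULL */
if (ct)
        bpf_ct_release(ct); /* no map slot, drop the ref as usual */

/* prog B, possibly a different program run much later */
struct map_val *v = bpf_map_lookup_elem(&m, &key);
struct nf_conn *ct = NULL;
if (v)
        bpf_move_ptr(&ct, &v->saved_ct); /* ownership comes back from the map */
if (ct)
        bpf_ct_release(ct); /* normal leak checking applies to 'ct' again */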