Ilya Leoshkevich wrote:
> On Wed, 2021-02-17 at 13:12 -0800, John Fastabend wrote:
> > John Fastabend wrote:
> > > Ilya Leoshkevich wrote:
> > > > The logic follows that of BTF_KIND_INT most of the time. Sanitization
> > > > replaces BTF_KIND_FLOATs with equally-sized BTF_KIND_INTs on older
> > >             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > Does this match the code though?
> > >
> > > > kernels.
> > > >
> > > > Signed-off-by: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
> > > > ---
> > >
> > > [...]
> > >
> > > > @@ -2445,6 +2450,9 @@ static void bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
> > > >  		} else if (!has_func_global && btf_is_func(t)) {
> > > >  			/* replace BTF_FUNC_GLOBAL with BTF_FUNC_STATIC */
> > > >  			t->info = BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0);
> > > > +		} else if (!has_float && btf_is_float(t)) {
> > > > +			/* replace FLOAT with INT */
> > > > +			t->info = BTF_INFO_ENC(BTF_KIND_FLOAT, 0, 0);
> > >
> > > Do we also need to encode the vlen here?
> >
> > Sorry, typo on my side: 't->size = ?' is what I was trying to point
> > out. Looks like it's set in the other case where we replace VAR with
> > INT.
>
> The idea is to have the size of the INT equal to the size of the FLOAT
> that it replaces. I guess we can't do the same for VARs, because they
> don't have the size field, and if we don't have DATASECs, then we can't
> find the size of a VAR at all.

Right, but KIND_INT has some extra constraints that don't appear to be
in place for KIND_FLOAT. For example, meta_check includes a max size
check. Should we check these when libbpf does the conversion as well?
Otherwise the kernel is going to give us an error that will be a bit
hard to understand.

Also, what am I missing here?
I use the writers to build a float:

  btf__add_float(btf, "new_float", 8);

This will create the btf_type struct approximately like this:

  struct btf_type t = {
  	.name_off = name_off,	/* points at my name */
  	.info = btf_type_info(BTF_KIND_FLOAT, 0, 0),
  	.size = 8,
  };

But if I create an int type with

  btf__add_int(btf, "net_int", 8, BTF_INT_SIGNED);

I will get a btf_type plus a trailing __u32. When we do the conversion,
how do we get away with skipping the extra __u32 setup?

  *(__u32 *)(t + 1) = (encoding << 24) | (byte_sz * 8);

Should we set this up on the conversion as well? Otherwise later steps
might try to read the __u32 piece and find some arbitrary memory.

.John