On Thu, Feb 11, 2021 at 1:26 PM Ilya Leoshkevich <iii@xxxxxxxxxxxxx> wrote:
>
> On Wed, 2021-02-10 at 16:19 -0800, Andrii Nakryiko wrote:
> > On Tue, Feb 9, 2021 at 7:03 PM Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
> > wrote:
> > >
> > > Add a new kind value, expand the kind bitfield, add a macro for
> > > parsing the additional u32.
> > >
> > > Signed-off-by: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
> > > ---
> > >  include/uapi/linux/btf.h       | 10 ++++++++--
> > >  tools/include/uapi/linux/btf.h | 10 ++++++++--
> > >  2 files changed, 16 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/include/uapi/linux/btf.h b/include/uapi/linux/btf.h
> > > index 5a667107ad2c..e713430cb033 100644
> > > --- a/include/uapi/linux/btf.h
> > > +++ b/include/uapi/linux/btf.h
> > > @@ -52,7 +52,7 @@ struct btf_type {
> > >         };
> > >  };
> > >
> > > -#define BTF_INFO_KIND(info)     (((info) >> 24) & 0x0f)
> > > +#define BTF_INFO_KIND(info)     (((info) >> 24) & 0x1f)
> > >  #define BTF_INFO_VLEN(info)     ((info) & 0xffff)
> > >  #define BTF_INFO_KFLAG(info)    ((info) >> 31)
> > >
> > > @@ -72,7 +72,8 @@ struct btf_type {
> > >  #define BTF_KIND_FUNC_PROTO     13      /* Function Proto */
> > >  #define BTF_KIND_VAR             14      /* Variable */
> > >  #define BTF_KIND_DATASEC         15      /* Section */
> > > -#define BTF_KIND_MAX             BTF_KIND_DATASEC
> > > +#define BTF_KIND_FLOAT           16      /* Floating point */
> > > +#define BTF_KIND_MAX             BTF_KIND_FLOAT
> > >  #define NR_BTF_KINDS             (BTF_KIND_MAX + 1)
> > >
> > >  /* For some specific BTF_KIND, "struct btf_type" is immediately
> > > @@ -169,4 +170,9 @@ struct btf_var_secinfo {
> > >         __u32   size;
> > >  };
> > >
> > > +/* BTF_KIND_FLOAT is followed by a u32 and the following
> >
> > What's the point of that u32, if BTF_FLOAT_BITS() is just
> > t->size * 8? Why add this complexity? BTF_KIND_INT has bits because
> > we had an inconvenient bitfield encoding as special BTF_KIND_INT
> > types, which we have since stopped using in favor of encoding
> > bitfield sizes and offsets inside struct/union fields. I don't think
> > there is any need for that with FLOAT, so why waste space and add
> > complexity and the possibility of inconsistencies?
>
> You are right, this is not necessary. I don't think something like a
> floating-point bitfield exists in the first place.
>
> > Disclaimer: I'm in the "just use a BTF_KIND_INT encoding bit for
> > floating-point numbers" camp.
>
> Despite being the guy who sent this series, I like such a simpler
> approach as well. In fact, my first attempt at this was even simpler,
> just a char[], but that didn't let us distinguish floats from "real"
> byte arrays, which the BTF_KIND_INT encoding does. But I think we
> need to convince Alexey that this would be OK :-) If it helps, I can
> implement the BTF_KIND_INT encoding variant, so that we could compare
> both approaches. What do you think?

Sorry to crash your party :)

I'm very much against calling "float" a different kind of "int".
BTF is not equivalent to DWARF. It's not a traditional debug format
whose purpose is to describe types, line info, etc. BTF is used in
verification, and its correctness is crucial. Imagine a function with
a KIND_INT argument: if there is no KIND_FLOAT, btf_func_model
construction will silently consume an 'int' that is actually a
'float', and that will cause a kernel crash.

There is also work in progress on "unstable helpers", which needs
accurate argument processing too; the same concern applies there. If
KIND_FLOAT is mistaken for KIND_INT, the kernel will crash again, but
in a different way.
The usage of floating point in the kernel is minimal, but the point stands.
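
To make the btf_func_model point concrete, here is a minimal sketch
(plain C against the patched uapi header; the helper name
arg_fits_int_reg is made up for illustration, this is not actual
verifier code) of the kind of check that only a distinct kind value
makes possible:

#include <stdbool.h>
#include <linux/btf.h>  /* struct btf_type, BTF_INFO_KIND, BTF_KIND_* */

/* Decide whether a function argument can safely be passed in an
 * integer register. With a separate BTF_KIND_FLOAT the float case is
 * rejected explicitly; if floats were a flavor of BTF_KIND_INT, this
 * switch would silently take the int path and mis-pass the argument.
 */
static bool arg_fits_int_reg(const struct btf_type *t)
{
        switch (BTF_INFO_KIND(t->info)) {
        case BTF_KIND_INT:
                return t->size <= 8;    /* fits in one 64-bit register */
        case BTF_KIND_FLOAT:
                return false;           /* reject instead of guessing */
        default:
                return false;           /* other kinds need their own handling */
        }
}

A distinct kind value forces every BTF consumer to make an explicit
decision about floats; reusing the KIND_INT encoding would let
unmodified consumers fall through the int path unnoticed.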