Re: [PATCH ipsec-next v1 6/7] bpf: selftests: test_tunnel: Disable CO-RE relocations

On 11/26/23 3:14 PM, Eduard Zingerman wrote:
On Sat, 2023-11-25 at 20:22 -0800, Yonghong Song wrote:
[...]
--- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
@@ -6,7 +6,10 @@
  * modify it under the terms of version 2 of the GNU General Public
  * License as published by the Free Software Foundation.
  */
-#define BPF_NO_PRESERVE_ACCESS_INDEX
+#if __has_attribute(preserve_static_offset)
+struct __attribute__((preserve_static_offset)) erspan_md2;
+struct __attribute__((preserve_static_offset)) erspan_metadata;
+#endif
 #include "vmlinux.h"
[...]
 int bpf_skb_get_fou_encap(struct __sk_buff *skb_ctx,
@@ -174,9 +177,13 @@ int erspan_set_tunnel(struct __sk_buff *skb)
         __u8 hwid = 7;
 
         md.version = 2;
+#if __has_attribute(preserve_static_offset)
         md.u.md2.dir = direction;
         md.u.md2.hwid = hwid & 0xf;
         md.u.md2.hwid_upper = (hwid >> 4) & 0x3;
+#else
+        /* Change bit-field store to byte(s)-level stores. */
+#endif
 #endif
 
         ret = bpf_skb_set_tunnel_opt(skb, &md, sizeof(md));

====
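
(For illustration only, not part of the quoted patch: one way the
"byte(s)-level stores" placeholder in the #else branch could be filled in.
The struct, field order, and bit positions below are assumptions modeled
loosely on the erspan md2 bit-fields and would have to be checked against
include/net/erspan.h before use.)

    /* Illustrative stand-in for the md2 byte that holds the o/gra/dir/hwid
     * bit-fields; little-endian bit-field layout assumed, so o is bit 0,
     * gra bits 1-2, dir bit 3, hwid bits 4-7. */
    struct md2_like {
            unsigned char o:1, gra:2, dir:1, hwid:4;
    };

    static inline void set_dir_hwid(struct md2_like *m, unsigned char dir,
                                    unsigned char hwid)
    {
            unsigned char *b = (unsigned char *)m;

            /* Single byte store instead of two bit-field stores; this also
             * clears o/gra, which is fine if the struct was memset to 0
             * beforehand as in the test. */
            *b = (unsigned char)(((dir & 0x1) << 3) | ((hwid & 0xf) << 4));
    }

The idea is simply that a plain byte store leaves the compiler no bit-field
access to lower, so the emitted access is a simple, naturally aligned store.
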

Eduard, could you double check whether this is a valid use case
for solving this kind of issue with the preserve_static_offset attribute?
Tbh I'm not sure. This test passes with preserve_static_offset
because it suppresses preserve_access_index. In general, clang
translates bit-field access into a set of IR statements like:

   C:
     struct foo {
       unsigned _;
       unsigned a:1;
       ...
     };
     ... foo->a ...

   IR:
     %a = getelementptr inbounds %struct.foo, ptr %0, i32 0, i32 1
     %bf.load = load i8, ptr %a, align 4
     %bf.clear = and i8 %bf.load, 1
     %bf.cast = zext i8 %bf.clear to i32

With preserve_static_offset the getelementptr+load pair is replaced by a
single statement which is preserved as-is until code generation,
so the load with align 4 is preserved.

On the other hand, I'm not sure that clang guarantees that the loads or
stores used for bit-field access will always be aligned the way the
verifier expects.

I think it should be true. The frontend does alignment analysis based on
the types and on whether the struct is packed or unpacked, and assigns
each load/store the proper alignment (like the 'align 4' above).
'align 4' truly means the load itself is 4-byte aligned. Otherwise it
would be very confusing for architectures which do not support unaligned
memory access (e.g. BPF).
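
(Editorial illustration of the point above, reusing the struct from the
earlier IR example; not taken from the thread. In an unpacked struct the
bit-field's storage unit starts at a naturally aligned offset, which is
where the 'align 4' annotation comes from.)

    /* 'a' lives in the storage unit that starts at offset 4, and offset 4
     * is 4-byte aligned in this unpacked struct. */
    struct foo {
            unsigned _;
            unsigned a:1;
    };

    static inline int read_a(struct foo *p)
    {
            /* The load backing this bit-field read starts at offset 4,
             * i.e. a 4-byte aligned address, matching the 'align 4'
             * annotation in the IR above; on BPF, which has no unaligned
             * access, this is what the verifier expects. */
            return p->a;
    }
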


I think we should check whether there are any clang knobs that prevent
the generation of unaligned memory accesses. I'll take a look.





