On 10/28/21 2:24 PM, Arnaldo Carvalho de Melo wrote:
On Thu, Oct 28, 2021 at 01:27:10PM +0100, Douglas RAILLARD wrote:
From: Douglas Raillard <douglas.raillard@xxxxxxx>
When the inferred alignment is larger than the natural alignment, it is
very likely that the source code was using the cacheline size. Therefore,
use the cacheline size whenever doing so would only result in increasing
the alignment.
--- /tmp/btfdiff.dwarf.pXdgRU 2021-10-28 10:22:11.738200232 -0300
+++ /tmp/btfdiff.btf.bkDkdf 2021-10-28 10:22:11.925205061 -0300
@@ -107,7 +107,7 @@ struct Qdisc {
/* XXX 24 bytes hole, try to pack */
/* --- cacheline 2 boundary (128 bytes) --- */
- struct sk_buff_head gso_skb __attribute__((__aligned__(64))); /* 128 24 */
+ struct sk_buff_head gso_skb __attribute__((__aligned__(32))); /* 128 24 */
struct qdisc_skb_head q; /* 152 24 */
struct gnet_stats_basic_packed bstats; /* 176 16 */
/* --- cacheline 3 boundary (192 bytes) --- */
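For illustration, here is a minimal standalone sketch (not pahole code, and
assuming a 64-byte cacheline) of how the added condition plays out for the
gso_skb member above: plain BTF inference yields 32, and since 32 divides 64
and the 128-byte offset is a multiple of 64, the alignment is bumped to the
cacheline size, matching the DWARF output:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint16_t cacheline_size = 64;	/* assumed cacheline size */
	uint32_t byte_offset = 128;	/* offset of gso_skb in struct Qdisc */
	uint32_t alignment = 32;	/* what BTF inference yielded on its own */

	/* Same condition as the one added by the patch below */
	if (alignment < cacheline_size &&
	    cacheline_size % alignment == 0 &&
	    byte_offset % cacheline_size == 0)
		alignment = cacheline_size;

	assert(alignment == 64);	/* now matches __aligned__(64) from DWARF */
	return 0;
}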
This one is gone with the heuristic, thanks for accepting my suggestion and
coding it this fast!
Thanks for the quick review!
From my manual checks, I could basically not find any __aligned__(32) anymore
after this patch, as expected. The main remaining difference with DWARF is
missing __aligned__ attributes when the member was already suitably aligned,
but that should be harmless and there is nothing we can do about it unless
BTF is augmented.
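For instance, in this hypothetical struct (not taken from the kernel), the
attribute on 'b' leaves every member offset and the total size unchanged, so
it cannot be inferred back from BTF:

struct example {
	long a[8];                                   /*   0  64 */
	long b[8] __attribute__((__aligned__(64)));  /*  64  64 */
};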
Applied.
I'm pushing it out to the 'next' branch; please work from there. I'll
move it to 'master' when it passes libbpf's CI tests.
Noted, I rebased on "next"; sorry for the repeated patches.
- Douglas
- Arnaldo
Signed-off-by: Douglas Raillard <douglas.raillard@xxxxxxx>
---
btf_loader.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/btf_loader.c b/btf_loader.c
index e500eae..7a5b16f 100644
--- a/btf_loader.c
+++ b/btf_loader.c
@@ -476,6 +476,7 @@ static uint32_t class__infer_alignment(const struct conf_load *conf,
uint32_t natural_alignment,
uint32_t smallest_offset)
{
+ uint16_t cacheline_size = conf->conf_fprintf->cacheline_size;
uint32_t alignment = 0;
uint32_t offset_delta = byte_offset - smallest_offset;
@@ -494,6 +495,15 @@ static uint32_t class__infer_alignment(const struct conf_load *conf,
/* Natural alignment, nothing to do */
if (alignment <= natural_alignment || alignment == 1)
alignment = 0;
+ /* If the offset is compatible with being aligned on the cacheline size
+ * and this would only result in increasing the alignment, use the
+ * cacheline size as it is safe and quite likely to be what was in the
+ * source.
+ */
+ else if (alignment < cacheline_size &&
+ cacheline_size % alignment == 0 &&
+ byte_offset % cacheline_size == 0)
+ alignment = cacheline_size;
return alignment;
}
--
2.25.1