Re: [RFC PATCH 1/1] Fix __kcrctab+* sections alignment


Hello Masahiro,

On 8/28/22 15:59, Masahiro Yamada wrote:
On Thu, Aug 25, 2022 at 9:21 PM Yann Sionneau <ysionneau@xxxxxxxxx> wrote:
Hello Ard,

On 25/08/2022 14:12, Ard Biesheuvel wrote:
On Thu, 25 Aug 2022 at 14:10, Yann Sionneau <ysionneau@xxxxxxxxx> wrote:
Forwarding also the actual patch to linux-kbuild and linux-arch

-------- Forwarded Message --------
Subject:        [RFC PATCH 1/1] Fix __kcrctab+* sections alignment
Date:   Wed, 17 Aug 2022 18:14:38 +0200
From:   Yann Sionneau <ysionneau@xxxxxxxxx>
To:     linux-kernel@xxxxxxxxxxxxxxx
CC:     Nicolas Schier <nicolas@xxxxxxxxx>, Masahiro Yamada
<masahiroy@xxxxxxxxxx>, Jules Maselbas <jmaselbas@xxxxxxxxx>, Julian
Vetter <jvetter@xxxxxxxxx>, Yann Sionneau <ysionneau@xxxxxxxxx>



What happened to the commit log?
This is a forward of this thread: https://lkml.org/lkml/2022/8/17/868

Either I did something wrong with my mail client, or maybe the email
containing the cover letter is taking some time to reach you?

Signed-off-by: Yann Sionneau <ysionneau@xxxxxxxxx>
---
include/linux/export-internal.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/export-internal.h
b/include/linux/export-internal.h
index c2b1d4fd5987..d86bfbd7fa6d 100644
--- a/include/linux/export-internal.h
+++ b/include/linux/export-internal.h
@@ -12,6 +12,6 @@
/* __used is needed to keep __crc_* for LTO */
#define SYMBOL_CRC(sym, crc, sec) \
- u32 __section("___kcrctab" sec "+" #sym) __used __crc_##sym = crc
+ u32 __section("___kcrctab" sec "+" #sym) __used __aligned(4) __crc_##sym = crc
__aligned(4) is the default for u32 so this should not be needed.
Well, I am not completely sure about that. See my cover letter: the previous
mechanism for symbol CRCs actually enforced a 4-byte alignment on the section
as well.
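
Concretely, with the fix applied, an export's CRC entry would expand roughly
like this (a sketch; the symbol name and CRC value below are made up for
illustration):

    /* Hypothetical expansion of SYMBOL_CRC(foo, 0x9a3c1e2b, "") with
     * the __aligned(4) fix applied. */
    u32 __section("___kcrctab+foo") __used __aligned(4)
    __crc_foo = 0x9a3c1e2b;

This pins both the variable and its per-symbol input section to a 4-byte
alignment, independent of whatever default the target picks for u32.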

Also, I am not sure it is forbidden for an architecture/compiler
implementation to enforce a stronger alignment on u32, which in theory would
not break anything.

But in this precise case it does break something, since it causes "gaps" in
the resulting vmlinux binary segment. For this to work, I think we really
want to enforce a 4-byte alignment on the section.
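
To illustrate the failure mode (a hedged sketch, not the kernel's actual
loader code): the linker merges all the ___kcrctab+* input sections into the
__kcrctab output section, and that table is consumed as a dense array of u32,
so any alignment padding between per-symbol input sections shifts every later
entry:

    /* Sketch only: the merged __kcrctab section runs in lockstep with
     * the exported-symbol table.  If the linker had to insert padding
     * between over-aligned per-symbol input sections, entry n would no
     * longer be the CRC of exported symbol n. */
    extern const u32 __start___kcrctab[];  /* linker-provided marker */

    static u32 crc_of_export(unsigned int n)
    {
            return __start___kcrctab[n];  /* assumes no inter-entry gaps */
    }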


Please teach me a bit more about the kvx compiler.



How does it access an array of u32?

u32 foo[2] = { 1, 2 };


Does the compiler insert padding between the elements
so that both &foo[0] and &foo[1] are 8-byte aligned?

Or is there no padding, so that &foo[1] sits at 8n + 4?

Here is what's happening: https://godbolt.org/z/Yz74W7jGY

unsigned int foo[2] = { 1, 2 };

int get_foo_sum(unsigned int *foo) {
    return foo[0] + foo[1];
}
gets compiled to:
get_foo_sum:
    lwz $r1 = 0[$r0]
    ;;  # (end cycle 0)
    lwz $r0 = 4[$r0]
    ;;  # (end cycle 1)
    addw $r0 = $r1, $r0
    ret
    ;;  # (end cycle 4)
foo:
    .long 1
    .long 2
So it seems that no padding is inserted, which looks sane ^^
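
For what it's worth, the C standard already guarantees the array case:
elements are contiguous, so even a compiler that over-aligns a lone u32
object cannot pad between elements of a u32 array. A compile-time check
(plain C11, nothing kernel-specific):

    #include <assert.h>   /* static_assert (C11) */
    #include <stdint.h>

    /* Arrays are dense by definition: sizeof(T[n]) == n * sizeof(T).
     * Over-alignment can only change where the array starts, never the
     * spacing between its elements. */
    static_assert(sizeof(uint32_t[2]) == 2 * sizeof(uint32_t),
                  "no inter-element padding in u32 arrays");

The kernel case is subtly different, though: each CRC is a separate object in
its own input section, so it is the section alignment, not array semantics,
that decides whether the linker leaves gaps - hence the explicit __aligned(4).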
