On Thu, May 07, 2009 at 03:26:49PM -0700, H. Peter Anvin wrote:
> From: H. Peter Anvin <hpa at zytor.com>
>
> Aligning the .bss section makes it trivially faster, and makes using
> larger transfers for the clear slightly easier.
>
> [ Impact: trivial performance enhancement, future patch prep ]
>
> Signed-off-by: H. Peter Anvin <hpa at zytor.com>
> ---
>  arch/x86/boot/compressed/vmlinux.lds.S |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
> index 0d26c92..27c168d 100644
> --- a/arch/x86/boot/compressed/vmlinux.lds.S
> +++ b/arch/x86/boot/compressed/vmlinux.lds.S
> @@ -42,6 +42,7 @@ SECTIONS
>  		*(.data.*)
>  		_edata = . ;
>  	}
> +	. = ALIGN(32);

Where does this magic 32 come from?
I would assume the better choice would be:

	. = ALIGN(L1_CACHE_BYTES);

so we match the relevant CPU.

In general, for alignment of output sections I see the need for:
1) Function call
2) L1_CACHE_BYTES
3) PAGE_SIZE
4) 2*PAGE_SIZE

But I see magic constants used here and there that do not match the
above (when looking at all archs). So I act when I see a new 'magic'
number..

	Sam
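
For illustration, a sketch of what the L1_CACHE_BYTES variant might look like in
context. This assumes the linker script is run through the preprocessor and that
<asm/cache.h> (which defines L1_CACHE_BYTES) can be included there without
clashes; the .bss body shown is approximate, not copied from the patch:

	/*
	 * Hypothetical sketch only, not the actual patch: assumes
	 * <asm/cache.h> is usable from this preprocessed linker script.
	 */
	#include <asm/cache.h>

	SECTIONS
	{
		.data : {
			*(.data)
			*(.data.*)
			_edata = . ;
		}
		/* align to the CPU's L1 cache line instead of a magic 32 */
		. = ALIGN(L1_CACHE_BYTES);
		.bss : {
			_bss = . ;
			*(.bss)
			*(.bss.*)
			*(COMMON)
			_ebss = . ;
		}
	}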