Hi Alban,

On Mon, Feb 13, 2017 at 09:38:08PM +0100, Alban wrote:
> From: Alban Bedel <albeu@xxxxxxx>
>
> Compressed images (vmlinuz.bin) have to be loaded at a specific
> address that differs from the address normally used for vmlinux.bin.
> This is because the decompressor just writes its output at the address
> vmlinux.bin should be loaded at, and it shouldn't overwrite itself.
> This limitation means that the bootloader must be configured differently
> when loading a vmlinux.bin or a vmlinuz.bin image, which is annoying
> and a source of errors.
>
> To work around this, we extend the compressed loader to cope with being
> loaded at (nearly) any address. During early init a jump is used
> to compute the offset between the current address and the linked
> address; if they differ, the whole image is first copied to the linked
> address before proceeding.
>
> Some load addresses won't work, for example if there is an overlap with
> the range where vmlinuz.bin should be loaded. However, for the typical
> case of using the vmlinux.bin address, that won't be the case.
>
> Signed-off-by: Alban Bedel <albeu@xxxxxxx>
> Suggested-by: Jonas Gorski <jonas.gorski@xxxxxxxxx>
> ---
> Changelog:
> v2: * Rework the code as suggested by Jonas Gorski to autodetect the
>       load address and remove the need for a Kconfig option.
> ---
>  arch/mips/boot/compressed/head.S | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
>
> diff --git a/arch/mips/boot/compressed/head.S b/arch/mips/boot/compressed/head.S
> index 409cb48..3c25a96 100644
> --- a/arch/mips/boot/compressed/head.S
> +++ b/arch/mips/boot/compressed/head.S
> @@ -25,6 +25,29 @@ start:
>  	move	s2, a2
>  	move	s3, a3
>
> +	/* Get the offset between the current address and linked address */
> +	PTR_LA	t0, reloc_label
> +	bal	reloc_label
> +	nop
> +reloc_label:
> +	subu	t0, ra, t0
> +
> +	/* If there is no offset no reloc is needed */
> +	beqz	t0, clear_bss
> +	nop
> +
> +	/* Move the text, data section and DTB to the correct address */
> +	PTR_LA	a0, .text
> +	addu	a1, t0, a0
> +	PTR_LA	a2, _edata
> +copy_vmlinuz:
> +	lw	a3, 0(a1)
> +	sw	a3, 0(a0)
> +	addiu	a0, a0, 4
> +	bne	a2, a0, copy_vmlinuz
> +	addiu	a1, a1, 4

Does this need to sync the icache and resolve the instruction hazard
before jumping into the newly written code?

E.g. on MIPS32/64 r2 and later you could, I think, use "synci" at
SYNCI_Step intervals (as determined with the RDHWR instruction),
followed by a "sync", and then use "jr.hb" instead of "jr" to clear the
instruction hazard while jumping to the newly written code. That is
roughly what arch/mips/kernel/relocate.c and arch/mips/kernel/head.S do
(a rough sketch of such a sequence is appended below, after the quoted
patch), but as mentioned that assumes MIPS32/64 r2+, and at least 2
platforms selecting SYS_SUPPORTS_ZBOOT* also select SYS_HAS_CPU_MIPS32_R1.

Cheers
James

> +
> +clear_bss:
>  	/* Clear BSS */
>  	PTR_LA	a0, _edata
>  	PTR_LA	a2, _end
> --
> 2.7.4
>
>
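For illustration, here is a minimal sketch of the cache maintenance
sequence described above. It assumes a MIPS32/64 r2+ CPU (so that rdhwr,
synci and jr.hb exist), reuses the .text/_edata symbols and the
clear_bss label from the patch, and assumes .text is cache-line aligned;
the register choices and the exact placement after the copy loop are
assumptions for the sketch, not something taken from the posted patch:

	/*
	 * Sketch only: make the just-written instructions visible to
	 * instruction fetch, then jump to the relocated copy at its
	 * linked address. Assumes .set noreorder, hence the explicit
	 * delay slot nops.
	 */
	PTR_LA	a0, .text	/* start of the freshly written range */
	PTR_LA	a1, _edata	/* end of the freshly written range */
	rdhwr	t1, $1		/* HWR 1 = SYNCI_Step */
	beqz	t1, 1f		/* a step of 0 means synci is not needed */
	nop
0:	synci	0(a0)		/* writeback D-cache line, invalidate I-cache line */
	addu	a0, a0, t1
	sltu	t2, a0, a1
	bnez	t2, 0b		/* loop over the whole copied range */
	nop
1:	sync			/* wait for the cache operations to complete */
	PTR_LA	t0, clear_bss	/* continue at the linked address */
	jr.hb	t0		/* jump and clear the instruction hazard */
	nop

On an r1-only platform the same effect would presumably need the cache
instruction with line sizes read from CP0 Config1 instead, which is why
the SYS_HAS_CPU_MIPS32_R1 platforms James mentions are the awkward case.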