On Tue, May 19, 2020 at 02:49:57PM +0200, Lukasz Stelmach wrote:
> It was <2020-05-19 wto 13:27>, when Russell King - ARM Linux admin wrote:
> > On Tue, May 19, 2020 at 02:20:25PM +0200, Lukasz Stelmach wrote:
> >> It was <2020-05-19 wto 12:43>, when Russell King - ARM Linux admin wrote:
> >>> On Tue, May 19, 2020 at 01:21:09PM +0200, Geert Uytterhoeven wrote:
> >>>> On Tue, May 19, 2020 at 11:46 AM Russell King - ARM Linux admin
> >>>> <linux@xxxxxxxxxxxxxxx> wrote:
> >>>>> On Tue, May 19, 2020 at 11:44:17AM +0200, Geert Uytterhoeven wrote:
> >>>>>> On Tue, May 19, 2020 at 10:54 AM Lukasz Stelmach <l.stelmach@xxxxxxxxxxx> wrote:
> >>>>>>> It was <2020-04-29 śro 10:21>, when Geert Uytterhoeven wrote:
> >>>>>>>> Currently, the start address of physical memory is obtained by masking
> >>>>>>>> the program counter with a fixed mask of 0xf8000000. This mask value
> >>>>>>>> was chosen as a balance between the requirements of different platforms.
> >>>>>>>> However, this does require that the start address of physical memory is
> >>>>>>>> a multiple of 128 MiB, precluding booting Linux on platforms where this
> >>>>>>>> requirement is not fulfilled.
> >>>>>>>>
> >>>>>>>> Fix this limitation by obtaining the start address from the DTB instead,
> >>>>>>>> if available (either explicitly passed, or appended to the kernel).
> >>>>>>>> Fall back to the traditional method when needed.
> [...]
> >>>>>>> Apparently, reading the physical memory layout from the DTB breaks
> >>>>>>> crashdump kernels. A crashdump kernel is loaded into a region of memory
> >>>>>>> that is reserved in the original (i.e. to-be-crashed) kernel. The
> >>>>>>> reserved region is large enough for the crashdump kernel to run
> >>>>>>> completely inside it without modifying anything outside it, just
> >>>>>>> reading and dumping the remains of the crashed kernel. Using the
> >>>>>>> information from the DTB makes the decompressor place the kernel
> >>>>>>> outside of the dedicated region.
> >>>>>>>
> >>>>>>> The log below shows that a zImage and DTB are loaded at 0x18eb8000 and
> >>>>>>> 0x193f6000 (physical). The kernel is expected to run at 0x18008000, but
> >>>>>>> it is decompressed to 0x00008000 (see r4 reported before jumping from
> >>>>>>> within __enter_kernel). If I were to suggest something, there needs to
> >>>>>>> be one more bit of information passed in the DTB telling the
> >>>>>>> decompressor to use the old masking technique to determine the kernel
> >>>>>>> address. It would be set in the DTB loaded along with the crashdump
> >>>>>>> kernel.
> [...]
> >>>>>> Describing "to use the old masking technique" sounds a bit hackish to me.
> >>>>>> I guess it cannot just restrict the /memory node to the reserved region,
> >>>>>> as the crashkernel needs to be able to dump the remains of the crashed
> >>>>>> kernel, which lie outside this region.
> >>>>>
> >>>>> Correct.
> >>>>>
> >>>>>> However, something under /chosen should work.
> >>>>>
> >>>>> Yet another sticky plaster...
> >>>>
> >>>> IMHO the old masking technique is the hacky solution covered by
> >>>> plasters.
> >>>
> >>> One line of code is not "covered by plasters". There are no plasters.
> >>> It's a solution that works for 99.99% of people, unlike your approach,
> >>> which has had a stream of issues over the last four months and has
> >>> required many reworks of the code to fix each one. That in itself
> >>> speaks volumes about the suitability of the approach.
> >>
> >> As I have been working with the kexec code (patches soon), I would like
> >> to defend the DT approach a bit.
> >> It makes it possible to avoid relocating the zImage when the
> >> decompressed kernel is larger than ~128 MiB. In such a case the zImage
> >> isn't small either, and moving it around takes some time.
> >
> > ... which is something that has been supported for a very long time,
> > before the days of DT.
>
> How? If a decompressed kernel requires >128M and a bootloader would like
> to put the zImage high enough to *avoid* copying it once again, then the
> decompressor can't see any memory below the 128M window it starts in and
> can't decompress the kernel there.

Do you have such a large kernel?  It would be rather inefficient, as
branch instructions could not be used; every function call would have
to be indirect.  The maximum is +/- 32MB for a branch.

> If we do not care about copying the zImage, then, indeed, everything
> works fine as it is today. You are most probably right that 99% don't
> require a 128M kernel, but the case is IMHO obvious enough that it
> should be addressed somehow.

If I have a kernel in excess of 4GB... "it should be addressed somehow"!

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC for 0.8m (est. 1762m) line in suburbia: sync at 13.1Mbps down 424kbps up
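
For readers who want to see what is actually being argued over, the two
placement strategies amount to roughly the sketch below. This is plain C
for illustration only, not the real decompressor code, and
fdt_get_mem_start() is a made-up helper standing in for whatever would
parse the base of the first /memory bank out of the DTB.

/*
 * Illustration only -- not the actual decompressor code.
 * fdt_get_mem_start() is a hypothetical helper that would return 0 and
 * fill *start with the base of the first /memory bank found in the DTB.
 */
int fdt_get_mem_start(const void *fdt, unsigned long *start);

static unsigned long pick_phys_start(const void *fdt, unsigned long pc)
{
        unsigned long start;

        /* Preferred (the patch under discussion): ask the DTB. */
        if (fdt && fdt_get_mem_start(fdt, &start) == 0)
                return start;

        /*
         * Traditional fallback ("the old masking technique"): assume the
         * zImage was loaded near the bottom of RAM and round the program
         * counter down to a 128 MiB boundary.  This is the 0xf8000000
         * mask, and it only works if RAM starts on a 128 MiB multiple.
         */
        return pc & 0xf8000000;
}

Under that reading, the crashdump report above is easy to follow: the DTB
describes all of RAM, so the first branch places the kernel at the bottom
of RAM (0x00008000) instead of inside the reserved window at 0x18008000.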
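
As for the "+/- 32MB for a branch" figure: the classic ARM-state B/BL
encoding carries a signed 24-bit offset counted in 4-byte words, which
gives +/- 2^23 * 4 bytes = +/- 32 MiB of reach. A throwaway check of the
arithmetic, nothing more:

#include <stdio.h>

/* Signed 24-bit word offset => +/- 2^23 words of 4 bytes each. */
int main(void)
{
        long range_bytes = (1L << 23) * 4;

        printf("A32 B/BL reach: +/- %ld bytes (= %ld MiB)\n",
               range_bytes, range_bytes >> 20);
        return 0;
}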