Hi Catalin,

On Tue, Oct 6, 2020 at 11:30 PM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
>
> On Mon, Oct 05, 2020 at 11:12:10PM +0530, Bhupesh Sharma wrote:
> > I think my earlier email with the test results on this series bounced
> > off the mailing list server (for some weird reason), but I still see
> > several issues with this patchset. I will add specific issues in the
> > review comments for each patch again, but overall, with a crashkernel
> > size of say 786M, I see the following issue:
> >
> > # cat /proc/cmdline
> > BOOT_IMAGE=(hd7,gpt2)/vmlinuz-5.9.0-rc7+ root=<..snip..> rd.lvm.lv=<..snip..> crashkernel=786M
> >
> > I see two regions reserved, 786M (high) and 256M (low), i.e. a total
> > of 1042M of memory, which is incorrect behaviour:
> >
> > # dmesg | grep -i crash
> > [    0.000000] Reserving 256MB of low memory at 2816MB for crashkernel (System low RAM: 768MB)
> > [    0.000000] Reserving 786MB of memory at 654158MB for crashkernel (System RAM: 130816MB)
> > [    0.000000] Kernel command line: BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.9.0-rc7+ root=/dev/mapper/rhel_ampere--hr330a--03-root ro rd.lvm.lv=rhel_ampere-hr330a-03/root rd.lvm.lv=rhel_ampere-hr330a-03/swap crashkernel=786M cma=1024M
> >
> > # cat /proc/iomem | grep -i crash
> > b0000000-bfffffff : Crash kernel (low)
> > bfcbe00000-bffcffffff : Crash kernel
>
> As Chen said, that's the intended behaviour and how x86 works. The
> requested 786M goes in the high range if there's not enough low memory
> and an additional buffer for swiotlb is allocated, hence the low 256M.

I understand, but why 256M (as low) for arm64? x86_64 setups usually
have more system memory available than several commercially available
arm64 setups. So is the intent just to keep the behaviour similar
between arm64 and x86_64? Should we have a CONFIG option / bootarg to
help one select the maximum 'low_size'?

Currently the 'low_size' value is calculated as:

        /*
         * two parts from kernel/dma/swiotlb.c:
         * -swiotlb size: user-specified with swiotlb= or default.
         *
         * -swiotlb overflow buffer: now hardcoded to 32k. We round it
         *  to 8M for other buffers that may need to stay low too. Also
         *  make sure we allocate enough extra low memory so that we
         *  don't run out of DMA buffers for 32-bit devices.
         */
        low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);

Since many arm64 boards ship with swiotlb=0 (i.e. swiotlb turned off) in
their kernel bootargs, low_size still ends up being 256M in such cases,
even though that 256M could be put to other use. So should we instead
limit this to 64M and fail the crashkernel allocation request
(gracefully) otherwise? (A rough sketch of what I mean is at the end of
this mail.)

> We could (as an additional patch) subtract the 256M from the high
> allocation so that you'd get a low 256M and a high 512M, though I'm not
> sure it's worth it. Note that with a "crashkernel=786M,high" option, you
> still get the additional low 256M, otherwise the crashkernel won't be
> able to boot as there's no memory in ZONE_DMA. In the explicit ",high"
> request case, I'm not sure subtracting the 256M is more intuitive.
>
> In 5.11, we also hope to fix the ZONE_DMA layout for non-RPi4 platforms
> to cover the entire 32-bit address space (i.e. identical to the current
> ZONE_DMA32).
>
> > IMO, we should test this feature more before including it in 5.11.
>
> Definitely. That's one of the reasons we haven't queued it yet. So any
> help with testing here is appreciated.

Sure, I am running more checks on this series and will be back soon with
more updates.

Regards,
Bhupesh
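
P.S. To make the 64M idea above a bit more concrete, here is a rough,
untested sketch, loosely modelled on the current x86
reserve_crashkernel_low() logic. The function name, the CRASH_*
constants and the exact error handling are hypothetical placeholders of
mine, not an actual patch against this series:

        /*
         * Hypothetical sketch only: cap the implicit "low" crashkernel
         * reservation at 64M instead of rounding it up to 256M, and let
         * the caller drop the whole crashkernel reservation gracefully
         * if even that much cannot be found below 4G.
         */
        #include <linux/errno.h>
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/memblock.h>
        #include <linux/printk.h>
        #include <linux/sizes.h>
        #include <linux/swiotlb.h>

        #define CRASH_ALIGN             SZ_2M
        #define CRASH_ADDR_LOW_MAX      SZ_4G   /* 32-bit DMA'able limit */
        #define CRASH_LOW_MAX_SIZE      SZ_64M  /* proposed cap, instead of 256M */

        static int __init reserve_crashkernel_low_capped(void)
        {
                unsigned long long low_base, low_size;

                /* swiotlb buffer (or its default) plus 8M slack for other low users */
                low_size = swiotlb_size_or_default() + (8UL << 20);
                low_size = min_t(unsigned long long, low_size, CRASH_LOW_MAX_SIZE);

                /* look for a suitably aligned free range below 4G */
                low_base = memblock_find_in_range(0, CRASH_ADDR_LOW_MAX,
                                                  low_size, CRASH_ALIGN);
                if (!low_base) {
                        pr_warn("crashkernel: cannot allocate %lluMB of low memory\n",
                                low_size >> 20);
                        /* caller is expected to release the high region too */
                        return -ENOMEM;
                }

                memblock_reserve(low_base, low_size);
                pr_info("crashkernel: reserved %lluMB of low memory at 0x%llx\n",
                        low_size >> 20, low_base);
                return 0;
        }

With something like the above, when even 64M of low memory is not
available, the caller could release the high reservation as well and
fail the whole crashkernel request, instead of silently consuming an
extra 256M.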