On Wed, Jul 06, 2022 at 11:18:22PM +0800, guanghui.fgh wrote:
> On 2022/7/6 21:54, Mike Rapoport wrote:
> > One thing I can think of is to only remap the crash kernel memory if
> > it is part of an allocation that exactly fits into one or more PUDs.
> >
> > Say, in reserve_crashkernel() we try the memblock_phys_alloc() with
> > PUD_SIZE as alignment and size rounded up to PUD_SIZE. If this
> > allocation succeeds, we remap the entire area that now contains only
> > memory allocated in reserve_crashkernel() and free the extra memory
> > after remapping is done. If the large allocation fails, we fall back
> > to the original size and alignment and don't allow unmapping crash
> > kernel memory in arch_kexec_protect_crashkres().
>
> There is a new method.
> I think we should use the patch v3 (similar, but some changes need to
> be added):
>
> 1. We can walk the crashkernel block/section page tables,
> [[[keeping the original block/section mapping valid]]],
> rebuild the pte-level page mapping for the crashkernel memory, and
> rebuild the left & right margin memory (which is in the same
> block/section mapping but outside the crashkernel memory) with
> block/section mappings.
>
> 2. 'Replace' the original block/section mapping with the newly built
> mapping, iteratively.
>
> With this method, all the memory mappings stay valid all the time.

As I already commented on one of your previous patches, this is not
allowed by the architecture. If FEAT_BBM is implemented (ARMv8.4 I
think), the worst that can happen is a TLB conflict abort and the
handler should invalidate the TLBs and restart the faulting
instruction, assuming the handler won't try to access the same
conflicting virtual address. Prior to FEAT_BBM, that's not possible as
the architecture does not describe a precise behaviour of conflicting
TLB entries (you might as well get the TLB output of multiple entries
being or'ed together).

-- 
Catalin
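
For illustration only, here is a rough sketch of the PUD-sized
reservation fallback Mike describes above. The names
reserve_crashkernel_sketch and crash_unmap_ok and the SZ_2M fallback
alignment are made up for the example; this is not the actual arm64
reserve_crashkernel():

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/pgtable.h>
#include <linux/sizes.h>

/* Whether arch_kexec_protect_crashkres() may unmap the region later. */
static bool crash_unmap_ok __initdata;

static phys_addr_t __init reserve_crashkernel_sketch(phys_addr_t crash_size)
{
	phys_addr_t rounded = ALIGN(crash_size, PUD_SIZE);
	phys_addr_t base;

	/* First try: size rounded up to PUD_SIZE, PUD_SIZE alignment. */
	base = memblock_phys_alloc(rounded, PUD_SIZE);
	if (base) {
		/*
		 * The reservation covers whole PUDs, so it can later be
		 * remapped at page granularity; the rounding slack is
		 * returned to memblock (in the real flow this would
		 * happen only after the remapping is done).
		 */
		if (rounded > crash_size)
			memblock_phys_free(base + crash_size,
					   rounded - crash_size);
		crash_unmap_ok = true;
		return base;
	}

	/* Fallback: original size/alignment, keep the block mapping. */
	crash_unmap_ok = false;
	return memblock_phys_alloc(crash_size, SZ_2M);
}

With crash_unmap_ok false, arch_kexec_protect_crashkres() would leave
the linear map alone instead of splitting live block mappings, which is
what avoids the break-before-make problem described in the reply above.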