Hi,

On 08/14/20 at 04:21pm, Sang Yan wrote:
>
> On 08/14/20 14:58, Dave Young wrote:
> > On 08/14/20 at 01:52am, Sang Yan wrote:
> >> In normal kexec, relocating the kernel may cost 5 ~ 10 seconds to
> >> copy all segments from vmalloced memory to kernel boot memory,
> >> because the MMU is disabled.
> >
> > That is not the case on all archs; I assume your case is arm64, please
> > describe it in the patch log :)
> >
> Yes, it's particularly obvious on arm64. I will add it to the patch log,
> and test how long it takes on x86 and other arches.
>
> > About the arm64 problem, I know Pavel Tatashin is working on a patchset
> > to improve the performance by enabling the MMU.
> >
> > I added Pavel in cc, can you try his patches?
> >
> Thanks for your tips, I will try these patches. @Pavel: disable the MMU
> after finishing copying pages?
>
> >> We introduce quick kexec to save the time of copying memory as above,
> >> just like kdump (kexec on crash), by using reserved memory
> >> "Quick Kexec".
> >
> > This approach may have a gain, but it also introduces an extra
> > requirement to pre-reserve a memory region. I wonder what Eric thinks
> > about the idea.
> >
> > Anyway, the "quick" name does not sound very good. I would suggest not
> > introducing a new param; the code can check whether a pre-reserved
> > region exists and use it, and fall back to the old way if not.
> >
> Aha, I agree, but I thought it might change the old behavior of
> kexec_load.
>
> I will update the patch without introducing new flags or new params.

Frankly, I'm still not sure it is worth introducing a new interface if the
improvement can be done in arch code, as Pavel is doing. Can you try that
first?

Thanks
Dave
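To make the suggested fallback concrete, here is a minimal, self-contained
userspace sketch of the decision being discussed, not the real kexec code:
if a pre-reserved region is present, segments are copied into it at load
time so nothing has to be relocated with the MMU off at reboot; otherwise
they are staged the old way. All names here (struct region, quick_kexec_res,
load_segment) are made up for illustration only.

/*
 * Hypothetical model of the fallback: use a pre-reserved region when it
 * exists, otherwise behave like the current kexec load path.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct region {
	void *base;	/* start of the pre-reserved region, NULL if absent */
	size_t size;
	size_t used;
};

/* Stand-in for a reservation that would be parsed at boot time. */
static struct region quick_kexec_res;

static void *load_segment(struct region *res, const void *buf, size_t len)
{
	if (res->base && res->used + len <= res->size) {
		/* Fast path: copy into reserved memory now, no later copy. */
		void *dst = (char *)res->base + res->used;

		memcpy(dst, buf, len);
		res->used += len;
		printf("segment placed in reserved region\n");
		return dst;
	}

	/* Old behaviour: stage in ordinary memory, copy again at reboot. */
	void *staged = malloc(len);

	if (staged) {
		memcpy(staged, buf, len);
		printf("segment staged, will be copied at reboot\n");
	}
	return staged;
}

int main(void)
{
	char payload[64] = "kernel segment";

	/* Pretend 16 MiB was reserved; drop these two lines to exercise
	 * the fallback path instead. */
	quick_kexec_res.base = malloc(16 << 20);
	quick_kexec_res.size = 16 << 20;

	load_segment(&quick_kexec_res, payload, sizeof(payload));
	return 0;
}

In the kernel, the reservation would presumably come from a kdump-style
command-line reservation; how (and whether) to expose that is exactly what
the thread is still debating.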