HATAYAMA Daisuke <d.hatayama at jp.fujitsu.com> writes:

> Currently, reads of /proc/vmcore are done by read_oldmem(), which
> calls ioremap/iounmap for every single page. For example, if memory
> is 1GB, ioremap/iounmap is called (1GB / 4KB) times, that is, 262144
> times. This causes big performance degradation.
>
> In particular, the current main user of this mmap() is makedumpfile,
> which not only reads memory from /proc/vmcore but also does other
> processing like filtering, compression and I/O work. The page-table
> updates and the TLB flushes that follow make such processing much
> slower, though I have yet to write the makedumpfile patch and to
> confirm how much it improves.
>
> To address the issue, this patch implements mmap() on /proc/vmcore to
> improve read performance. My simple benchmark shows an improvement
> from 200 [MiB/sec] to over 50.0 [GiB/sec].

I am in favor of this direction, and the performance and other gains
look good. I am not in favor of the ABI changes, nor of the nearly
order-of-magnitude increase in memory usage for the ELF notes caused
by rounding everything up to a page-size boundary.

As a general note, it is possible to support mmapping any partial page
by simply rounding inside your mmap function, so you should not need
to copy partial pages (see the first sketch below).

If you don't want the memory overhead of merging the ELF notes in
memory in the second kernel, you can simply require that the ELF
header, the program headers, and the PT_NOTE segment be read from
/proc/vmcore instead of mmapped (see the second sketch below).

I did the math, and with your changes to note generation you are, in
the worst case, reserving 20MiB in the first kernel (that is, 5120
4KiB pages) to replace a 1.6MiB allocation in the second kernel with
a 240KiB one. That is the wrong tradeoff, especially when it requires
an ABI change at the same time, and those 5120+ entries in vmcore_list
will likely measurably slow down setting up your mappings with mmap.
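Concretely, the rounding I am talking about looks something like the
following. This is an untested sketch, and map_oldmem_chunk() and its
parameters are made up for illustration, but remap_pfn_range(),
rounddown() and roundup() are the real kernel interfaces:

#include <linux/kernel.h>
#include <linux/mm.h>

static int map_oldmem_chunk(struct vm_area_struct *vma,
                            unsigned long vaddr,      /* user addr, page aligned */
                            unsigned long long paddr, /* old-memory start, maybe unaligned */
                            size_t size)              /* length, maybe unaligned */
{
        /* Round the physical range out to page boundaries. */
        unsigned long long pstart = rounddown(paddr, PAGE_SIZE);
        unsigned long long pend = roundup(paddr + size, PAGE_SIZE);

        /*
         * The extra bytes at the head and tail of the rounded range
         * become visible to userspace, but the ELF headers already
         * say which bytes are valid, so nothing has to be copied in
         * the second kernel to handle partial pages.
         */
        return remap_pfn_range(vma, vaddr, pstart >> PAGE_SHIFT,
                               pend - pstart, vma->vm_page_prot);
}

On the consumer side, the split between read() and mmap() that I am
suggesting would look roughly like this, again as an untested sketch
rather than a makedumpfile patch. Note that rounding the file offset
down in userspace only works because the kernel maps the whole
containing page, which is what the rounding above buys you:

#include <elf.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        Elf64_Ehdr ehdr;
        Elf64_Phdr *phdrs;
        int fd, i;

        fd = open("/proc/vmcore", O_RDONLY);
        if (fd < 0)
                return 1;

        /* Small, oddly sized metadata: plain read(), no mmap needed. */
        if (read(fd, &ehdr, sizeof(ehdr)) != sizeof(ehdr))
                return 1;
        phdrs = malloc(ehdr.e_phnum * sizeof(*phdrs));
        pread(fd, phdrs, ehdr.e_phnum * sizeof(*phdrs), ehdr.e_phoff);

        for (i = 0; i < ehdr.e_phnum; i++) {
                if (phdrs[i].p_type == PT_NOTE) {
                        /* Notes too are read(), so the kernel never has
                         * to round them up to page boundaries. */
                        void *notes = malloc(phdrs[i].p_filesz);
                        pread(fd, notes, phdrs[i].p_filesz,
                              phdrs[i].p_offset);
                        free(notes);
                } else if (phdrs[i].p_type == PT_LOAD) {
                        /* Bulk memory is mmapped; mmap requires the
                         * file offset rounded down to a page boundary. */
                        off_t off = phdrs[i].p_offset & ~(off_t)(pagesize - 1);
                        size_t slack = phdrs[i].p_offset - off;
                        char *p = mmap(NULL, phdrs[i].p_filesz + slack,
                                       PROT_READ, MAP_PRIVATE, fd, off);
                        if (p != MAP_FAILED) {
                                /* Segment data starts at p + slack. */
                                munmap(p, phdrs[i].p_filesz + slack);
                        }
                }
        }
        free(phdrs);
        close(fd);
        return 0;
}

Eric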