Andrew Morton <akpm at linux-foundation.org> writes:

> On Sat, 16 Mar 2013 13:02:29 +0900 HATAYAMA Daisuke <d.hatayama at jp.fujitsu.com> wrote:
>
>> If there's a vmcore object that doesn't satisfy the page-size boundary
>> requirement, remap_pfn_range() fails to remap it to user-space.
>>
>> The only objects that possibly don't satisfy the requirement are the
>> ELF note segments. The memory chunks corresponding to PT_LOAD entries
>> are guaranteed to satisfy the page-size boundary requirement by the
>> copy from old memory to a buffer in the 2nd kernel, done in a later
>> patch.
>>
>> This patch doesn't copy each note segment into the 2nd kernel since
>> their total size can become very large when there are many CPUs. For
>> example, the current maximum number of CPUs on x86_64 is 5120, where
>> the note segments exceed 1MB with NT_PRSTATUS alone.
>
> I don't really understand this. Why does the number or size of note
> segments affect their alignment?
>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -38,6 +38,8 @@ static u64 vmcore_size;
>>
>>  static struct proc_dir_entry *proc_vmcore = NULL;
>>
>> +static bool support_mmap_vmcore;
>
> This is quite regrettable. It means that on some kernels/machines,
> mmap(vmcore) simply won't work. This means that people might write
> code which works for them, but which will fail for others when deployed
> on a small number of machines.
>
> Can we avoid this? Why can't we just copy the notes even if there are
> a large number of them?

Yes. If it simplifies things, I don't see a need to support mmapping
everything.

But even there I don't see much of an issue. Today we allocate a buffer
to hold the ELF header, the program headers, and the note segments, and
we could easily allocate that buffer in such a way as to make it
mmapable.

Eric
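
A minimal sketch of the approach Eric describes, under stated assumptions
(illustrative only, not the actual vmcore patch; notes_buf, notes_sz,
alloc_notes_buf and mmap_notes are made-up names): allocate the merged
note buffer with vmalloc_user(), which returns page-aligned, zeroed
memory that may be mapped into user space, and then hand it out from the
->mmap handler with remap_vmalloc_range():

	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	static void *notes_buf;		/* merged ELF note segments (hypothetical) */
	static size_t notes_sz;

	static int alloc_notes_buf(size_t sz)
	{
		/*
		 * vmalloc_user() gives page-aligned, zeroed memory that is
		 * explicitly allowed to be mapped into user space, so the
		 * note data no longer has to be page-aligned in old memory.
		 */
		notes_buf = vmalloc_user(PAGE_ALIGN(sz));
		if (!notes_buf)
			return -ENOMEM;
		notes_sz = sz;
		return 0;
	}

	/* Simplified ->mmap handler that exposes only the note buffer. */
	static int mmap_notes(struct file *file, struct vm_area_struct *vma)
	{
		size_t len = vma->vm_end - vma->vm_start;

		if (len + (vma->vm_pgoff << PAGE_SHIFT) > PAGE_ALIGN(notes_sz))
			return -EINVAL;

		/* Only valid because the buffer came from vmalloc_user(). */
		return remap_vmalloc_range(vma, notes_buf, vma->vm_pgoff);
	}

The real /proc/vmcore mmap handler has to multiplex the ELF headers, the
notes and the PT_LOAD ranges within a single file offset space, so the
actual implementation is more involved, but the "allocate the buffer in
a mmapable way" part comes down to the vmalloc_user() call above.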