Hello Hatayama-san,

On Fri, 30 Mar 2012 09:51:43 +0900
HATAYAMA Daisuke <d.hatayama at jp.fujitsu.com> wrote:

> For the processing of writing pages per range of memory, it's useful
> to reuse the code for --split's splitting feature, which splits a
> single dumpfile into multiple dumpfiles and has already prepared a
> data structure holding the start and end page frame numbers of the
> corresponding dumped memory. For example, see the part below in
> write_kdump_pages():
>
> 	if (info->flag_split) {
> 		start_pfn = info->split_start_pfn;
> 		end_pfn = info->split_end_pfn;
> 	}
> 	else {
> 		start_pfn = 0;
> 		end_pfn = info->max_mapnr;
> 	}
>
> 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>
> For the processing of creating and referencing bitmaps per range of
> memory, there are no such functions yet; only ones that operate on
> the whole memory exist: create_bitmap() and is_dumpable(). Also,
> creating the bitmap depends on the source dumpfile format. Trying
> the ELF to kdump-compressed format case first seems most handy (or,
> if the use case is on the 2nd kernel only, is this case enough?).
>
> As for the performance impact, I don't know exactly, but I guess
> iterating the filtering processing is the most significant part. I
> don't know the exact data structure for each kind of memory, but if
> there are ones that need a linear-order walk to look up the data for
> a given page frame number, it would be necessary to add some special
> handling so as not to reduce performance.

Thank you for your idea. I think this is an important issue, and I
have no idea other than iterating the filtering process for each
memory range. But as you said, we should consider the performance
issue. For example, makedumpfile would have to parse the free_list
repeatedly to distinguish whether each pfn is a free page or not,
because multiple ranges may lie inside the same zone. That will be
overhead.

Thanks,
Atsushi Kumagai