On 09/14/2020 05:15 PM, HAGIO KAZUHITO(萩尾 一仁) wrote:
> -----Original Message-----
>> On 09/14/2020 04:15 PM, HAGIO KAZUHITO(萩尾 一仁) wrote:
>>> -----Original Message-----
>>>> On 09/11/2020 04:53 PM, HAGIO KAZUHITO(萩尾 一仁) wrote:
>>>>> Hi Pingfan,
>>>>>
>>>>> -----Original Message-----
>>>>>> Hello,
>>>>>>
>>>>>> There is a request to save only some user pages, such as the env
>>>>>> and args pages, and discard the other user-space pages.
>>>>>
>>>>> I understand that it's helpful to get them even with -d 31, for
>>>>> crash's "ps -a" option.
>>>>>
>>>>>> To achieve this, mm_struct's members arg_start, arg_end,
>>>>>> env_start and env_end need to be accessed, so we need to export
>>>>>> mm_struct and init_mm through the vmcore.
>>>>>
>>>>> How many offsets/sizes will be required to walk all tasks?
>>>>
>>>> At present, I think only the info "arg_start, arg_end, env_start,
>>>> env_end" in mm_struct is required.
>>>
>>> Ah, what I mainly wanted to ask was the number of offsets/sizes
>>> needed to walk through all (user) tasks in a system, because
>>> makedumpfile cannot get to a task's arg_start with only
>>> OFFSET(mm_struct.arg_start). Is it easy enough to do with just a
>>> few vmcoreinfo entries?
>>
>> Yes, it is. Iterating over tasks requires exposing
>> OFFSET(mm_struct.mmlist) and &init_mm. Then for each mm_struct, we
>> need access to "arg_start, arg_end, env_start, env_end".
>
> Hmm, but a Fedora 32 machine has an empty init_mm.mmlist.
> (because no swap is in use?)

Aha, sorry, I made a mistake: mmlist is no longer used to link all of
the mm_structs. To reach every mm_struct in the system, the
init_task.tasks linked list should be exposed instead; for each task we
can then get its mm_struct via OFFSET(task_struct.mm), and from there
read OFFSET(mm_struct.arg_start). (Rough sketches below my sign-off.)

> crash> p init_mm.mmlist
> $1 = {
>   next = 0xffffffff826ee200 <init_mm+160>,
>   prev = 0xffffffff826ee200 <init_mm+160>
> }
> crash> swap
> SWAP_INFO_STRUCT  TYPE       SIZE      USED  PCT  PRI  FILENAME
> ffff8badb5385a00  PARTITION  4153340k    0k    0%   -2  /dev/dm-1
>
> I might still be missing something.

You are right. And could you foresee any problem with my new approach?

Thanks,
Pingfan
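
P.S. To make the new approach concrete, here is roughly the kernel-side
export I have in mind. This is an untested sketch and the exact entry
set is still open for discussion; it only uses the existing
VMCOREINFO_SYMBOL() and VMCOREINFO_OFFSET() macros:

    /* e.g. in crash_save_vmcoreinfo_init(), kernel/crash_core.c */
    VMCOREINFO_SYMBOL(init_task);
    VMCOREINFO_OFFSET(task_struct, tasks);
    VMCOREINFO_OFFSET(task_struct, mm);
    VMCOREINFO_OFFSET(mm_struct, arg_start);
    VMCOREINFO_OFFSET(mm_struct, arg_end);
    VMCOREINFO_OFFSET(mm_struct, env_start);
    VMCOREINFO_OFFSET(mm_struct, env_end);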
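
On the makedumpfile side, the walk would then look something like the
pseudo-C below. read_ulong() is a hypothetical helper standing in for
readmem(), and the SYMBOL()/OFFSET() values are the entries exported
above, so take this as a sketch rather than working code:

    unsigned long head, next, task, mm, arg_start, arg_end;

    /* init_task.tasks heads the circular list linking all tasks */
    head = SYMBOL(init_task) + OFFSET(task_struct.tasks);
    next = read_ulong(head);                /* tasks.next */

    while (next != head) {
            /* container_of(next, struct task_struct, tasks) */
            task = next - OFFSET(task_struct.tasks);

            mm = read_ulong(task + OFFSET(task_struct.mm));
            if (mm) {       /* kernel threads have mm == NULL */
                    arg_start = read_ulong(mm + OFFSET(mm_struct.arg_start));
                    arg_end   = read_ulong(mm + OFFSET(mm_struct.arg_end));
                    /*
                     * likewise env_start/env_end; then mark the pages
                     * backing [arg_start, arg_end) as dumpable
                     */
            }
            next = read_ulong(next);        /* this task's tasks.next */
    }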