Re: nvdimm,pmem: makedumpfile: __vtop4_x86_64: Can't get a valid pte.

On 01/12/2022 04:05, Dan Williams wrote:
> lizhijian@xxxxxxxxxxx wrote:
>> Hi folks,
>>
>> I'm working on making crash coredump support pmem regions, so I have
>> modified kexec-tools to add the pmem region to the PT_LOAD segments of the vmcore.
>>
>> But it failed in makedumpfile; the log is as follows:
>>
>> In my environment, I found that the last 512 pages of the pmem region cause the error.
>>
>> qemu commandline:
>>    -object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/root/qemu-dax.img,share=yes,size=4267704320,align=2097152
>> -device nvdimm,node=0,label-size=4194304,memdev=memnvdimm0,id=nvdimm0,slot=0
>>
>> ndctl info:
>> [root@rdma-server ~]# ndctl list
>> [
>>     {
>>       "dev":"namespace0.0",
>>       "mode":"devdax",
>>       "map":"dev",
>>       "size":4127195136,
>>       "uuid":"f6fc1e86-ac5b-48d8-9cda-4888a33158f9",
>>       "chardev":"dax0.0",
>>       "align":4096
>>     }
>> ]
>> [root@rdma-server ~]# ndctl list -iRD
>> {
>>     "dimms":[
>>       {
>>         "dev":"nmem0",
>>         "id":"8680-56341200",
>>         "handle":1,
>>         "phys_id":0
>>       }
>>     ],
>>     "regions":[
>>       {
>>         "dev":"region0",
>>         "size":4263510016,
>>         "align":16777216,
>>         "available_size":0,
>>         "max_available_extent":0,
>>         "type":"pmem",
>>         "iset_id":10248187106440278,
>>         "mappings":[
>>           {
>>             "dimm":"nmem0",
>>             "offset":0,
>>             "length":4263510016,
>>             "position":0
>>           }
>>         ],
>>         "persistence_domain":"unknown"
>>       }
>>     ]
>> }
>>
>> iomem info:
>> [root@rdma-server ~]# cat /proc/iomem  | grep Persi
>> 140000000-23e1fffff : Persistent Memory
>>
>> makedumpfile info:
>> [   57.229110] kdump.sh[240]: mem_map[  71] ffffea0008e00000           238000           23e200
>>
>>
>> First, my understanding so far is:
>> 1) makedumpfile reads the whole iomem range (the same range as the pmem PT_LOAD);
>> 2) the 1st kernel only sets up the mem_map (vmemmap) for this namespace, which for some reason is 512 pages smaller than the iomem range;
>> 3) since the nvdimm region has an alignment (16MiB above), I guess the maximum pmem size usable by the user is the iomem size aligned down to that 16MiB, so the last 512 pages get dropped and the kernel only sets up the vmemmap for the aligned range. But I didn't find any code on the kernel side doing this.
>>
>> If you know the reason, please let me know :) Any hint/feedback is very welcome.
> 
> This is due to the region alignment.
> 
> 2522afb86a8c libnvdimm/region: Introduce an 'align' attribute
> 

Dan,

Thank you very much. That's exactly the reason.
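
For the record, the numbers line up with that: a quick back-of-the-envelope check in shell, using the region size and align from the ndctl output above (the rounding is plain integer arithmetic, not an existing helper):

region_size=4263510016                        # region0 size, 0xfe200000
align=$((16 << 20))                           # region0 align, 16MiB
usable=$(( region_size / align * align ))     # rounded down: 4261412864 (0xfe000000)
echo $(( (region_size - usable) / 4096 ))     # -> 512, i.e. a 2MiB tail no namespace can cover

So the iomem range (and therefore the PT_LOAD) extends 512 pages past anything a namespace in region0 can reach, which would explain why makedumpfile trips over exactly those last pages.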



> If you want to use the full capacity it would be something like this
> (untested, and may destroy any data currently on the namespace):
> 
> ndctl destroy-namespace namespace0.0
> echo $((2<<20)) > /sys/bus/nd/devices/region0/align
> ndctl create-namespace -m dax -a 4k -M mem
> 

It works for me, but the alignment resets to 16MiB after a reboot. Is this expected?
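
For reference, the state after the recipe can be checked with the commands below, together with an untested stop-gap for the reboot reset (only the sysfs path and ndctl options already shown in this thread are used; whether rewriting the attribute at boot, after the namespace has been probed, is actually sufficient is an assumption on my part):

cat /sys/bus/nd/devices/region0/align     # 2097152 after the recipe, back to 16777216 after reboot
ndctl list -R                             # region align and available_size
ndctl list                                # namespace0.0 size and align

# untested stop-gap: reassert the alignment from an early boot script
echo $((2 << 20)) > /sys/bus/nd/devices/region0/align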


Thanks
Zhijian


> _______________________________________________
> kexec mailing list
> kexec@xxxxxxxxxxxxxxxxxxx
> http://lists.infradead.org/mailman/listinfo/kexec



