Re: [PATCH v12 7/7] x86/crash: Add x86 crash hotplug support

On 10/27/22 at 02:28pm, Eric DeVolder wrote:
> 
> 
> On 10/27/22 08:52, Baoquan He wrote:
> > On 10/26/22 at 04:54pm, David Hildenbrand wrote:
> > > On 26.10.22 16:48, Baoquan He wrote:
> > > > On 10/25/22 at 12:31pm, Borislav Petkov wrote:
> > > > > On Thu, Oct 13, 2022 at 10:57:28AM +0800, Baoquan He wrote:
> > > > > > The concern about the range number is mainly on virt guest systems.
> > > > > 
> > > > > And why would virt emulate 1K hotpluggable DIMM slots and not emulate a
> > > > > real machine?
> > > 
> > > IIRC, ACPI only allows for 256 slots. PPC dlpar might provide more.
> > > 
> > > > 
> > > > Well, currently, memory hotplug is an important feature on virt systems
> > > > to dynamically grow/shrink a system's memory. If virt only emulated a
> > > > real machine, it would be no different from a bare metal system.
> > > > 
> > > > IIRC, the balloon driver or the virtio-mem feature can add a memory
> > > > board, e.g. 1G with a 128M block size, i.e. 8 blocks added. When
> > > > shrinking that 1G of memory later, it takes a best-effort approach to
> > > > hot remove memory: any memory block that is occupied is kept. In the
> > > > end we might only be able to remove every second block, 4 blocks
> > > > altogether, and the remaining un-removed blocks then form 4 separate
> > > > memory regions. Like this, a virt guest can end up with many memory
> > > > regions in the kernel after memory has been added and removed.
> > > > 
> > > > If I am wrong, please correct me, David.
> > > 
> > > Yes, virtio-mem (but also PPC dlpar) can result in many individual memory
> > > blocks with holes in between after hotunplug. Hotplug, OTOH, usually
> > > tries to "plug" these holes and reduce the total number of memory blocks.
> > > It might be rare that our range ends up heavily fragmented after unplug,
> > > but it's certainly possible.
> > > 
> > > [...]
> > > 
> > > > 
> > > > Yes, now assume we have an HPE SGI system that has memory hotplug
> > > > capability. The system itself already has more than 1024 memory
> > > > regions. When we hot add an extra memory board, we want the newly
> > > > added memory regions included in the elfcorehdr so that they will be
> > > > dumped out in the kdump kernel.
> > > > 
> > > > That's why I earlier suggested 2048 for the number of memory regions.
> > > 
> > > The more the better, unless "it hurts". Assuming a single memory block is
> > > 128 MiB, that would be 256 GiB.
> > > 
> > > Usually, on big systems, the memory block size is 2 GiB. So 4 TiB.
> > 
> > Thanks a lot for these valuable inputs, David.
> > 
> > Hi Boris, Eric
> > 
> > So what's your suggested value for the Kconfig option?
> > 
> > 1) cpu number, 1024?
> > 2) memory regions, 2048?
> > 
> > Any comments on the draft below? We can pick a value based on our current
> > knowledge and adjust it later if any real system exceeds the number.
> > 
> > +config CRASH_ELF_CORE_PHDRS_NUM
> > +       depends on CRASH_DUMP && KEXEC_FILE && (HOTPLUG_CPU || MEMORY_HOTPLUG)
> > +       int
> > +       default 3072
> > +       help
> > +         For the kexec_file_load path, specify the default number of
> > +         phdrs for the vmcore, e.g. the memory regions represented by
> > +         the 'System RAM' entries in /proc/iomem and the CPU notes of
> > +         each present CPU stored in
> > +         /sys/devices/system/cpu/cpuX/crash_notes.
> > 
> > Thanks
> > Baoquan
> > 
> 
> I prefer to keep CRASH_MAX_MEMORY_RANGES, as explained in my response to your message on October 26.
> eric

Ah, sorry, I mixed it up with NR_CPUS. I was on an office outing
yesterday; glad to see you and Boris have reached an agreement on the
code change and value. Thanks.



_______________________________________________
kexec mailing list
kexec@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/kexec


