Re: [PATCH 0/4] kdump: crashkernel reservation from CMA

On Thu, 7 Dec 2023 09:55:20 +0100
Michal Hocko <mhocko@xxxxxxxx> wrote:

> On Thu 07-12-23 12:23:13, Baoquan He wrote:
> [...]
> > We can't guarantee how swift the DMA transfer could be in the CMA case;
> > it would be a gamble.  
> 
> We can't guarantee this of course but AFAIK the DMA shouldn't take
> minutes, right? While not perfect, waiting for some time before jumping
> into the crash kernel should be acceptable from user POV and it should
> work around most of those potential lingering programmed DMA transfers.

I don't think that simply waiting is acceptable. For one, it doesn't
guarantee that there is no corruption (please also see below); it only
reduces its probability. Furthermore, how long would you wait?
The thing is that users want to reduce not only the memory usage but
also the downtime of kdump. In the end I'm afraid that "simply waiting"
will make things unnecessarily more complex without really solving any
issue.

> So I guess what we would like to hear from you as kdump maintainers is
> this. Is it absolutely imperative that these issue must be proven
> impossible or is a best effort approach something worth investing time
> into? Because if the requirement is an absolute guarantee then I simply
> do not see any feasible way to achieve the goal of reusable memory.
> 
> Let me reiterate that the existing reservation mechanism is showing its
> limits for production systems and I strongly believe this is something
> that needs addressing because crash dumps are very often the only tool
> to investigate complex issues.

Because having a crash dump is so important, I want proof that no
legal operation can corrupt the crashkernel memory. The easiest way to
achieve this is by simply keeping the two memory regions fully
separated, like it is today. In theory it should also be possible to
prevent any kind of page pinning in the shared crashkernel memory. But
I don't know what side effects this would have on mm. Such an idea
needs to be discussed on the mm mailing list first.

Finally, let me question whether the whole approach actually solves
anything. For me, the difficulty in determining the correct crashkernel
memory size is only a symptom. The real problem is that most developers
don't expect their code to run outside their typical environment,
especially not in a memory-constrained environment like kdump. That
problem won't be solved by throwing more memory at it, as the
additional memory will eventually run out as well. In the end we are
back at the point where we are today, just with more memory.

Finally finally, one tip. Next time a customer complains about how
much memory the crashkernel "wastes", ask them how much one day of
downtime for one machine costs them and how much memory they could buy
for that money. After that calculation I'm pretty sure that an
additional 100M of crashkernel memory becomes much more tempting.
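That back-of-envelope comparison can be sketched quickly; all figures
below are made-up assumptions for illustration, not numbers from this
thread:

```python
# Hypothetical figures for illustration only; substitute real ones.
downtime_cost_per_day_usd = 10_000   # assumed cost of one machine-day of downtime
ram_cost_per_gib_usd = 5.0           # assumed street price of server RAM per GiB
extra_crashkernel_mib = 100          # the extra reservation under discussion

# Cost of permanently setting aside an extra 100M per machine.
extra_ram_cost_usd = extra_crashkernel_mib / 1024 * ram_cost_per_gib_usd

# How many times over a single day of downtime would pay for that RAM.
days_of_downtime_equivalent = downtime_cost_per_day_usd / extra_ram_cost_usd

print(f"extra 100M of RAM costs ~${extra_ram_cost_usd:.2f}")
print(f"one day of downtime pays for it ~{days_of_downtime_equivalent:,.0f} times over")
```

Even with pessimistic assumptions, the RAM side of the ledger stays
well under a dollar per machine.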

Thanks
Philipp


_______________________________________________
kexec mailing list
kexec@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/kexec


