Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> > On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
> ...
> > Well, not exactly. That's something I'd like to have indeed, but from my
> > POV this goal is out of the scope of the discussion at the moment.
> > Let me try to express it the same way you did above:
> > 
> > 1. Boot some kernel
> > 2. Grow the deposited memory a bunch
> > 3. Kexec
> > 4. Kernel panic due to a GPF upon accessing the memory deposited to
> > the hypervisor.
> 
> I basically consider this a bug in the first kernel.  It *can't* kexec
> when it's left RAM in shambles.  It doesn't know what features the new
> kernel has and whether this is even safe.
> 

Could you elaborate more on why this is a bug in the first kernel?
Say, kernel memory can be allocated in big, physically contiguous
chunks by the first kernel for depositing. The information about these
chunks is then passed to the second kernel via FDT or even the command
line, so the second kernel can reserve the region during boot.
What's wrong with this approach?
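
To make this concrete, here is a rough sketch (not code from this series) of
how the second kernel could pick such a region up during early boot. The node
path "/hypervisor/deposited-memory" and the property layout are made up for
illustration:

#include <linux/init.h>
#include <linux/libfdt.h>
#include <linux/memblock.h>
#include <linux/types.h>

/*
 * Hypothetical sketch: reserve a deposited region described by a custom
 * FDT node before the page allocator is up.  The node path and property
 * layout are invented for this example.
 */
static void __init reserve_deposited_memory(const void *fdt)
{
	const __be64 *prop;
	u64 base, size;
	int node, len;

	node = fdt_path_offset(fdt, "/hypervisor/deposited-memory");
	if (node < 0)
		return;

	/* Expect a <base size> pair of 64-bit cells. */
	prop = fdt_getprop(fdt, node, "reg", &len);
	if (!prop || len < 2 * sizeof(__be64))
		return;

	base = be64_to_cpup(prop);
	size = be64_to_cpup(prop + 1);

	/* Keep the deposited range away from the page allocator. */
	memblock_reserve(base, size);
}

Something like this, called from the early FDT scanning path, would keep the
deposited chunk intact across the kexec.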

> Can the new kernel even read the new device tree data?
> 

I'm not sure I understand the question, to be honest.
Why can't it? This series contains code parts for both the first and
second kernels.

> >> Can't the deposited memory just be shrunk before kexec?  Surely there
> >> aren't a bunch of pathological things consuming that memory right before
> >> kexec, which is basically a reboot.
> > 
> > In general it can. But for this to happen the hypervisor needs to release
> > this memory. And it can release the memory iff the guests are stopped.
> > And stopping the guests during kexec isn't something we want to have in the
> > long run.
> > Also, even if we stop the guests before kexec, we need to restart them
> > after boot, meaning we have to deposit the pages once again.
> > All this: stopping the guests, withdrawing the pages upon kexec,
> > allocating after boot and depositing them again, significantly affects
> > guest downtime.
> 
> Ahh, and you're presumably kexec'ing in the first place because you've
> got a bug in the first kernel and you want a second kernel with fewer bugs.
> 

Right. All this is for "kernel servicing" purposes, when kexec is used
to update the kernel in a fleet in an attempt to reduce user downtime
as much as possible.
I'm sorry for keeping this bit of context to myself instead of
explicitly stating it in the series description: it wasn't intentional.

> I still think the only way this will possibly work when kexec'ing both
> old and new kernels is to do it with the memory maps that *all* kernels
> can read.
> 

Could you elaborate more on this?
The available memory map actually stays the same for both kernels. The
difference can be in the list of memory regions to reserve: when the
first kernel has allocated and deposited another chunk, the second
kernel needs to reserve that memory as a new region upon booting.

Can all this be considered as, say, the first kernel using the device
tree to inform the second kernel about the memory regions to reserve?
In this case the first kernel behaves a bit like a piece of firmware for
the second one.
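
For illustration only (again, not what the series does), the first kernel
could even use the standard FDT memory reservation block for this, wherever
the image for kexec is assembled:

#include <linux/libfdt.h>
#include <linux/types.h>

/*
 * Sketch: record a deposited chunk as a /memreserve/ entry in the FDT
 * that will be handed to the next kernel.
 */
static int record_deposited_chunk(void *fdt, u64 base, u64 size)
{
	return fdt_add_mem_rsv(fdt, base, size);
}

Since early_init_fdt_scan_reserved_mem() already reserves /memreserve/
entries, the second kernel wouldn't need any new parsing code, which seems
close to your point about memory maps that all kernels can read.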

> Can the hypervisor be improved to make this release operation faster?

I guess it can, but shutting down the guests contributes to the downtime
the most. And without shutting down the guests, the deposited memory
can't be withdrawn.

Thanks,
Stanislav
