Re: [PATCH 00/10] Introduce guestmemfs: persistent in-memory filesystem

On Mon, 2024-08-05 at 16:41 +0200, Paolo Bonzini wrote:
> On Mon, Aug 5, 2024 at 4:35 PM Theodore Ts'o <tytso@xxxxxxx> wrote:
> > On Mon, Aug 05, 2024 at 11:32:35AM +0200, James Gowans wrote:
> > > Guestmemfs implements preservation across kexec by carving out a
> > > large contiguous block of host system RAM early in boot which is
> > > then used as the data for the guestmemfs files.
> > 
> > Also, the VMM update process is not a common case thing, so we don't
> > need to optimize for performance.  If we need to temporarily use
> > swap/zswap to allocate memory at VMM update time, and if the pages
> > aren't contiguous when they are copied out before doing the VMM
> > update
> 
> I'm not sure I understand, where would this temporary allocation happen?

The intended use case for live update is to update the entirety of the
hypervisor: kexecing into a new kernel and launching new VMM processes.
So any kernel state (page tables, VMAs, (z)swap entries, struct pages,
etc.) is lost across kexec and needs to be re-created. That's the job of
guestmemfs: provide persistence across kexec and the ability to
re-create the mappings by re-opening the files.
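
To make the "re-open and re-map" step concrete, here is a minimal
userspace sketch of what a freshly launched VMM would do after kexec.
The mount point and file name are hypothetical and error handling is
abbreviated; this only illustrates the idea, it is not the actual VMM
code:

/*
 * Minimal post-kexec sketch: re-open a persisted guestmemfs file and
 * map guest RAM back in. Path below is made up for illustration.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/guestmemfs/vm0-ram";	/* hypothetical */
	struct stat st;
	void *guest_ram;
	int fd;

	/* The file and its contents survived the kexec; just re-open it. */
	fd = open(path, O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (fstat(fd, &st) < 0) {
		perror("fstat");
		return 1;
	}

	/* Re-create the mapping of guest RAM from the persisted file. */
	guest_ram = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (guest_ram == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ... hand guest_ram back to KVM as the guest's memory ... */

	munmap(guest_ram, st.st_size);
	close(fd);
	return 0;
}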

It would be far too disruptive to have to write the whole of guest
memory out to disk, and with CoCo VMs that isn't really possible anyway.
When virtual machines are running, every millisecond of downtime counts.
It would also be wasteful to keep terabytes of SSDs lying around just to
briefly write all the guest RAM there and read it back out a moment
later. Much better to leave the guest memory where it is: in memory.

> 
> > that might be very well worth the vast amount of memory needed to
> > pay for reserving memory on the host for the VMM update that only
> > might happen once every few days/weeks/months (depending on whether
> > you are doing update just for high severity security fixes, or for
> > random VMM updates).
> > 
> > Even if you are updating the VMM every few days, it still doesn't seem
> > that permanently reserving contiguous memory on the host can be
> > justified from a TCO perspective.
> 
> As far as I understand, this is intended for use in systems that do
> not do anything except hosting VMs, where anyway you'd devote 90%+ of
> host memory to hugetlbfs gigapages.

Exactly, the use case here is for machines whose only job is to be a KVM
hypervisor. The majority of system RAM is donated to guestmemfs;
anything else (host kernel memory and VMM anonymous memory) is
essentially overhead and should be minimised.
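
For a rough picture of how such a host would be set up: the region is
carved out on the kernel command line at early boot and the filesystem
is then mounted on top of it. The parameter name, size and mount
details below are assumptions for illustration, not copied verbatim
from the patches:

  # Carve out the bulk of system RAM for guestmemfs at early boot
  # (parameter name and size assumed here for illustration)
  ... guestmemfs=900G ...

  # After boot, and again after every kexec, mount the filesystem;
  # previously created files and their contents are visible again.
  mount -t guestmemfs none /mnt/guestmemfs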

JG



