On Mon, 2024-08-05 at 10:32 -0400, Theodore Ts'o wrote:
> On Mon, Aug 05, 2024 at 11:32:35AM +0200, James Gowans wrote:
> > Guestmemfs implements preservation across kexec by carving out a
> > large contiguous block of host system RAM early in boot which is
> > then used as the data for the guestmemfs files.
>
> Why does the memory have to be (a) contiguous, and (b) carved out of
> *host* system memory early in boot? This seems to be very inflexible;
> it means that you have to know how much memory will be needed for
> guestmemfs in early boot.

The main reason for both of these is to guarantee that the huge (2 MiB
PMD) and gigantic (1 GiB PUD) allocations can happen. While this patch
series only does huge page allocations for simplicity, the intention is
to extend it to gigantic PUD-level allocations soon (I'd like to get the
simple functionality merged before adding more complexity). Other than
doing a memblock allocation at early boot, there is no way that I know
of to do GiB-size contiguous allocations dynamically.

As for the need for a single contiguous chunk, that's a simplification
for now. As mentioned in the cover letter, there currently isn't any
NUMA support in this patch series; we'd want to add NUMA handling in a
follow-up series. At that point it would become multiple contiguous
allocations, one for each NUMA node that the user wants to run VMs on.

JG
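
P.S. For readers less familiar with memblock, here is a minimal sketch
of the sort of early-boot carve-out described above. This is
illustrative only, not code from the series: guestmemfs_size,
guestmemfs_base and guestmemfs_reserve() are placeholder names, and in
practice the size would be parsed from a kernel command-line parameter.

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/pgtable.h>
#include <linux/printk.h>

/* Placeholder names, not the actual patch; the size would typically
 * be set from a command-line parameter via early_param(). */
static phys_addr_t guestmemfs_size;
static phys_addr_t guestmemfs_base;

static void __init guestmemfs_reserve(void)
{
	/*
	 * PUD_SIZE alignment so the carve-out can later be mapped with
	 * 1 GiB leaf entries. This must run while memblock is still the
	 * allocator of record: once the buddy allocator takes over, a
	 * contiguous, aligned multi-GiB range is effectively
	 * unobtainable.
	 */
	guestmemfs_base = memblock_phys_alloc(guestmemfs_size, PUD_SIZE);
	if (!guestmemfs_base)
		pr_err("guestmemfs: failed to reserve %pa bytes\n",
		       &guestmemfs_size);
}

The PUD_SIZE alignment is the key point: it is what makes the eventual
1 GiB mappings possible, and it is also why the reservation has to
happen this early in boot.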