On Fri, Jul 03, 2020 at 03:39:08PM +0200, David Hildenbrand wrote:
> This series is based on the latest s390/features branch [1]. It implements
> vmemmap_free(), consolidating it with vmem_add_range(), and optimizes it by
> - Freeing empty page tables (now also done for the identity mapping).
> - Handling cases where the vmemmap of a section does not fill huge pages
>   completely.
>
> vmemmap_free() is currently never used, unless adding standby memory fails
> (unlikely). This is relevant for virtio-mem, which adds/removes memory
> in memory block/section granularity (it always removes memory at the same
> granularity it was added).
>
> I gave this a proper test with my virtio-mem prototype (which I will share
> once the basic QEMU implementation is upstream), both with 56 byte memmap
> per page and 64 byte memmap per page, with and without huge page support.
> In both cases, removing memory (routed through arch_remove_memory()) will
> result in
> - all populated vmemmap pages getting removed/freed
> - all applicable page tables for the vmemmap getting removed/freed
> - all applicable page tables for the identity mapping getting removed/freed
> Unfortunately, I don't have access to bigger machines or z/VM (esp. dcss)
> environments.
>
> This is the basis for real memory hotunplug support for s390x and should
> complete my journey into the s390x vmem/vmemmap code for now :)
>
> What needs double-checking is TLB flushing. AFAICS, as there are no valid
> accesses, doing a single range flush at the end is sufficient, both when
> removing vmemmap pages and the identity mapping.
>
> Along with that, some minor cleanups.

Hmm.. I would really like to see only a single page table walker left in
vmem.c, one that handles both adding and removing things. Right now we end
up with two different page table walk implementations within the same file.
However, I'm not sure it is worth the effort to unify them.
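
To illustrate what I have in mind (a rough, untested sketch only -- the
helper names below the pgd level are made up, this is not a working patch):
a single entry point that takes an "add" flag and reuses the same per-level
walkers for both populating and freeing, along these lines:

/*
 * Sketch of a combined walker for vmem.c: one entry point handles both
 * adding and removing ranges, for the identity mapping (direct) and the
 * vmemmap. modify_p4d_table() and the lower-level helpers are hypothetical
 * here and would repeat the same pattern down to the pte level.
 */
static int modify_pagetable(unsigned long start, unsigned long end,
			    bool add, bool direct)
{
	unsigned long addr, next;
	int ret = -ENOMEM;
	pgd_t *pgd;
	p4d_t *p4d;

	for (addr = start; addr < end; addr = next) {
		next = pgd_addr_end(addr, end);
		pgd = pgd_offset_k(addr);

		if (!add) {
			/* Nothing mapped here, nothing to remove. */
			if (pgd_none(*pgd))
				continue;
		} else if (pgd_none(*pgd)) {
			p4d = vmem_crst_alloc(_REGION2_ENTRY_EMPTY);
			if (!p4d)
				goto out;
			pgd_populate(&init_mm, pgd, p4d);
		}
		ret = modify_p4d_table(pgd, addr, next, add, direct);
		if (ret)
			goto out;
	}
	ret = 0;
out:
	/* As you note above: a single range flush at the end on removal. */
	if (!add)
		flush_tlb_kernel_range(start, end);
	return ret;
}

vmem_add_range()/vmem_remove_range() and the vmemmap populate/free paths
would then just be thin wrappers around this. Anyway, as said, I'm not
sure it is worth the effort.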