Re: [PATCH 19/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs

On Thu, 03 Feb 2022 01:00:47 +0000,
David Matlack <dmatlack@xxxxxxxxxx> wrote:
> 
> Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at
> declaration time rather than being fixed for all declarations. This will
> be used in a follow-up commit to declare a cache in x86 with a capacity
> of 512+ objects without having to increase the capacity of all caches in
> KVM.
> 
> No functional change intended.
> 
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_host.h |  2 +-
>  arch/arm64/kvm/mmu.c              | 12 ++++++------
>  arch/mips/include/asm/kvm_host.h  |  2 +-
>  arch/x86/include/asm/kvm_host.h   |  8 ++++----
>  include/linux/kvm_types.h         | 24 ++++++++++++++++++++++--
>  virt/kvm/kvm_main.c               |  8 +++++++-
>  6 files changed, 41 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 3b44ea17af88..a450b91cc2d9 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -357,7 +357,7 @@ struct kvm_vcpu_arch {
>  	bool pause;
>  
>  	/* Cache some mmu pages needed inside spinlock regions */
> -	struct kvm_mmu_memory_cache mmu_page_cache;
> +	DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);

I must say I'm really not a fan of the anonymous structure trick. I
can see why you are doing it that way, but it feels pretty brittle.
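
For reference, my reading is that the macro expands to something along
these lines (the definition itself isn't quoted in this hunk, so this
is a reconstruction rather than the actual patch):

	/* Backing storage must sit immediately after the cache it backs. */
	#define DEFINE_KVM_MMU_MEMORY_CACHE(_name)				  \
		struct {							  \
			struct kvm_mmu_memory_cache _name;			  \
			void *_name##_objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];\
		}

i.e. in kvm_vcpu_arch it becomes an anonymous member whose fields get
hoisted into the enclosing struct, which is what keeps existing
vcpu->arch.mmu_page_cache users compiling while the objects storage
tags along right behind the cache.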

>  
>  	/* Target CPU and feature flags */
>  	int target;
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index bc2aba953299..9c853c529b49 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -765,7 +765,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  {
>  	phys_addr_t addr;
>  	int ret = 0;
> -	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
> +	DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {};
> +	struct kvm_mmu_memory_cache *cache = &page_cache.cache;
>  	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
>  				     KVM_PGTABLE_PROT_R |
> @@ -774,18 +775,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  	if (is_protected_kvm_enabled())
>  		return -EPERM;
>  
> +	cache->gfp_zero = __GFP_ZERO;

nit: consider this instead, which preserves the existing flow:

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 26d6c53be083..86a7ebd03a44 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -764,7 +764,9 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 {
 	phys_addr_t addr;
 	int ret = 0;
-	DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {};
+	DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {
+		.cache = { .gfp_zero = __GFP_ZERO },
+	};
 	struct kvm_mmu_memory_cache *cache = &page_cache.cache;
 	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
@@ -774,7 +776,6 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	if (is_protected_kvm_enabled())
 		return -EPERM;
 
-	cache->gfp_zero = __GFP_ZERO;
 	size += offset_in_page(guest_ipa);
 	guest_ipa &= PAGE_MASK;
 
but the whole "declare the outer structure and just use the inner one"
hack is... huh... :-/
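
To make the "brittle" part concrete: taking &page_cache.cache and then
letting the cache fill objects[] past the end of the struct is only
sound as long as the wrapper's array starts exactly where the embedded
cache's objects[] member does, i.e. something like the check below has
to hold (hypothetical, using the presumed _name##_objects naming --
nothing like this is in the series):

	static_assert(offsetof(typeof(page_cache), cache_objects) ==
		      offsetof(typeof(page_cache), cache) +
		      offsetof(struct kvm_mmu_memory_cache, objects));

Any padding, reordering or extra member between the two and you're
silently scribbling over whatever follows.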

This hunk also conflicts with what currently sits in -next. Not a big
deal, but just so you know.

> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index dceac12c1ce5..9575fb8d333f 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -78,14 +78,34 @@ struct gfn_to_pfn_cache {
>   * MMU flows is problematic, as is triggering reclaim, I/O, etc... while
>   * holding MMU locks.  Note, these caches act more like prefetch buffers than
>   * classical caches, i.e. objects are not returned to the cache on being freed.
> + *
> + * The storage for the cache objects is laid out after the struct to allow
> + * different declarations to choose different capacities. If the capacity field
> + * is 0, the capacity is assumed to be KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE.
>   */
>  struct kvm_mmu_memory_cache {
>  	int nobjs;
> +	int capacity;
>  	gfp_t gfp_zero;
>  	struct kmem_cache *kmem_cache;
> -	void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
> +	void *objects[0];

The VLA police are going to track you down ([0] vs []).
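
i.e. what they will ask for is the C99 flexible array member spelling
(a straight copy of the hunk above, just with the array spelling
changed):

	struct kvm_mmu_memory_cache {
		int nobjs;
		int capacity;
		gfp_t gfp_zero;
		struct kmem_cache *kmem_cache;
		void *objects[];
	};

Zero-length arrays are the deprecated GNU extension and the kernel has
been actively converting them over, see
Documentation/process/deprecated.rst.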

	M.

-- 
Without deviation from the norm, progress is not possible.


