Re: [PATCH 19/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs

On Mon, 07 Mar 2022 23:49:06 +0000,
David Matlack <dmatlack@xxxxxxxxxx> wrote:
> 
> On Sat, Mar 5, 2022 at 8:55 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> >
> > On Fri, 04 Mar 2022 21:59:12 +0000,
> > David Matlack <dmatlack@xxxxxxxxxx> wrote:
> > >
> > > On Thu, Feb 24, 2022 at 11:20 AM David Matlack <dmatlack@xxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Feb 24, 2022 at 3:29 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, 03 Feb 2022 01:00:47 +0000,
> > > > > David Matlack <dmatlack@xxxxxxxxxx> wrote:
> > > > > >
> > >
> > > [...]
> > >
> > > > > >
> > > > > >       /* Cache some mmu pages needed inside spinlock regions */
> > > > > > -     struct kvm_mmu_memory_cache mmu_page_cache;
> > > > > > +     DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);
> > > > >
> > > > > I must say I'm really not a fan of the anonymous structure trick. I
> > > > > can see why you are doing it that way, but it feels pretty brittle.
> > > >
> > > > Yeah I don't love it. It's really optimizing for minimizing the patch diff.
> > > >
> > > > The alternative I considered was to dynamically allocate the
> > > > kvm_mmu_memory_cache structs. This would get rid of the anonymous
> > > > struct and the objects array, and also eliminate the rather gross
> > > > capacity hack in kvm_mmu_topup_memory_cache().
> > > >
> > > > The downsides of this approach are more code and more failure paths
> > > > if the allocation fails.
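
(For anyone skimming the thread, my understanding is that the trick
boils down to something like the below -- a wrapper that tacks a fixed
objects[] array onto the flexible array at the end of
kvm_mmu_memory_cache. Sketch only, the exact definition lives in the
patch:

	struct kvm_mmu_memory_cache {
		...
		int capacity;
		void *objects[];	/* storage supplied by the wrapper */
	};

	#define DEFINE_KVM_MMU_MEMORY_CACHE(_name)				\
		struct {							\
			struct kvm_mmu_memory_cache _name;			\
			void *_name##_objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE]; \
		}
)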
> > >
> > > I tried changing all kvm_mmu_memory_cache structs to be dynamically
> > > allocated, but it added a lot of complexity to the setup/teardown
> > > code paths in x86, arm64, mips, and riscv (the arches that use the
> > > caches). I don't think this route is worth it, especially since these
> > > structs don't *need* to be dynamically allocated.
> > >
> > > When you said the anonymous struct feels brittle, what did you have in
> > > mind specifically?
> >
> > I can perfectly see someone using a kvm_mmu_memory_cache and spending
> > a while working out why they end up with memory corruption. Yes, this
> > would be a rookie mistake, but there is an expectation all over the
> > kernel that DEFINE_* and the corresponding structure are the same
> > object.
> 
> That is a good point. And that risk is very real given that
> kvm_mmu_topup_memory_cache() assumes the capacity is
> KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE if the capacity field is 0.

Exactly. I like being surprised as much as the next one, but I'm not
sure about this particular instance ;-).
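
To spell out the failure mode: with the fallback below (only a sketch
of the logic under discussion, not the actual code), a bare
kvm_mmu_memory_cache declared without the macro still gets topped up
as if the default-sized objects[] array were there, and every store
lands past the end of the structure:

	int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
	{
		/* The capacity hack: 0 silently means the arch default. */
		int capacity = mc->capacity ? mc->capacity :
					      KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
		void *obj;

		if (mc->nobjs >= min)
			return 0;
		while (mc->nobjs < capacity) {
			obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
			if (!obj)
				return mc->nobjs >= min ? 0 : -ENOMEM;
			/* Out of bounds if no wrapper ever provided the array. */
			mc->objects[mc->nobjs++] = obj;
		}
		return 0;
	}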

> One way to mitigate this would be to get rid of the capacity hack in
> kvm_mmu_topup_memory_cache() and require the capacity field be
> explicitly initialized. That will make it harder to trip over this
> and/or easier to debug because kvm_mmu_topup_memory_cache() can issue
> a WARN() if the capacity is 0. Once you see that warning and go to
> initialize the capacity field you'll realize why it needs to be set in
> the first place. The diff will just be slightly larger to set capacity
> for each cache.

That'd be fine. I don't mind the extra diff if this can be made more
or less foolproof.
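
Something along these lines is what I have in mind -- only a sketch of
your suggestion, the details are obviously yours to pick:

	int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
	{
		void *obj;

		/* Capacity must now be set explicitly by the arch code. */
		if (WARN_ON_ONCE(!mc->capacity))
			return -EINVAL;

		if (mc->nobjs >= min)
			return 0;
		while (mc->nobjs < mc->capacity) {
			obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
			if (!obj)
				return mc->nobjs >= min ? 0 : -ENOMEM;
			mc->objects[mc->nobjs++] = obj;
		}
		return 0;
	}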

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


