On Wed, 09 Mar 2022 21:49:01 +0000,
David Matlack <dmatlack@xxxxxxxxxx> wrote:
> 
> On Mon, Mar 7, 2022 at 3:49 PM David Matlack <dmatlack@xxxxxxxxxx> wrote:
> >
> > On Sat, Mar 5, 2022 at 8:55 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> > >
> > > On Fri, 04 Mar 2022 21:59:12 +0000,
> > > David Matlack <dmatlack@xxxxxxxxxx> wrote:
> > > > I see two alternatives to make this cleaner:
> > > >
> > > > 1. Dynamically allocate just this cache. The caches defined in
> > > > vcpu_arch will continue to use DEFINE_KVM_MMU_MEMORY_CACHE(). This
> > > > would get rid of the outer struct but require an extra memory
> > > > allocation.
> > > >
> > > > 2. Move this cache to struct kvm_arch using
> > > > DEFINE_KVM_MMU_MEMORY_CACHE(). Then we don't need to stack-allocate
> > > > it or dynamically allocate it.
> > > >
> > > > Do either of these approaches appeal to you more than the current one?
> > >
> > > Certainly, #2 feels more solid. Dynamic allocations (and the resulting
> > > pointer chasing) are usually costly in terms of performance, so I'd
> > > avoid them if at all possible.
> > >
> > > That being said, if it turns out that #2 isn't practical, I won't get
> > > in the way of your current approach. Moving kvm_mmu_memory_cache to
> > > core code was definitely a good cleanup, and I'm not overly excited
> > > at the prospect of *more* arch-specific code.
> >
> > OK, I'll play with #2. Thanks for the feedback.
> 
> #2 is very clean to implement, but it ends up being a bit silly. It
> increases the size of struct kvm_arch by 336 bytes for all VMs, yet
> it only ever gets used during kvm_vgic_map_resources(), which is only
> called the first time a vCPU is run (according to the comment in
> kvm_arch_vcpu_run_pid_change()). I think stack allocation makes the
> most sense for this object; I don't think it's worth dancing around
> that solely to avoid the inner-struct grottiness.

Fair enough, and thanks for having had a look. I'll look at the next
version once you post it.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
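
For anyone following the thread, the stack-allocation pattern being
settled on above looks roughly like the sketch below. It is written
against the upstream kvm_mmu_memory_cache helpers in
virt/kvm/kvm_main.c (kvm_mmu_topup_memory_cache(),
kvm_mmu_memory_cache_alloc() and kvm_mmu_free_memory_cache()); the
function name and call site are invented for illustration, and the
RFC's DEFINE_KVM_MMU_MEMORY_CACHE() capacity wrapper — the "inner
struct grottiness" in question — is deliberately left out, since its
exact shape is what the series is still iterating on:

	#include <linux/kvm_host.h>

	/* Hypothetical helper; not the actual vgic code. */
	static int vgic_map_with_stack_cache(struct kvm *kvm)
	{
		/*
		 * The cache lives on the stack for the duration of this
		 * one-shot path, so struct kvm_arch doesn't grow by the
		 * 336 bytes mentioned above.
		 */
		struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
		void *obj;
		int ret;

		/* Fill the cache up front, while sleeping is still allowed. */
		ret = kvm_mmu_topup_memory_cache(&cache, 1);
		if (ret)
			return ret;

		/* Objects are then handed out without further allocation. */
		obj = kvm_mmu_memory_cache_alloc(&cache);

		/* ... use obj to back the mapping ... */

		/* Return any unused objects; nothing outlives this frame. */
		kvm_mmu_free_memory_cache(&cache);
		return 0;
	}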