Re: [PATCH 1/2] KVM: arm64: Fix host stage-2 PGD refcount

Hey Marc,

On Monday 04 Oct 2021 at 10:55:13 (+0100), Marc Zyngier wrote:
> Hi Quentin,
> 
> On Mon, 04 Oct 2021 10:03:13 +0100,
> Quentin Perret <qperret@xxxxxxxxxx> wrote:
> > 
> > The KVM page-table library refcounts the pages of concatenated stage-2
> > PGDs individually. However, the host's stage-2 PGD is currently managed
> > by EL2 as a single high-order compound page, which can cause the
> > refcount of the tail pages to reach 0 when they really shouldn't, hence
> > corrupting the page-table.
> 
> nit: this comment only applies to the protected mode, right? As far as
> I can tell, 'classic' KVM is just fine.

Correct, this really only applies to the host stage-2, which implies
we're in protected mode. I'll make that a bit more explicit.
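
To spell out the failure mode a bit more (illustration only, using the
hyp_page fields from the EL2 allocator): with a single high-order
allocation, only the head page carries the allocation reference, e.g.
for an order-2 PGD:

	/*
	 * Illustration only -- state of the host stage-2 PGD pages as
	 * currently handed out by hyp_alloc_pages():
	 *
	 *	hyp_virt_to_page(pgd)[0]:    order = 2, refcount = 1
	 *	hyp_virt_to_page(pgd)[1..3]: refcount = 0
	 *
	 * The page-table library refcounts each of those 4 pages on its
	 * own, so a balanced get/put sequence on a tail page takes it
	 * back to 0 and hands it to the allocator while the PGD is
	 * still live.
	 */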

> > Fix this by introducing a new hyp_split_page() helper in the EL2 page
> > allocator (matching EL1's split_page() function), and make use of it
> 
> uber nit: split_page() is not an EL1 function, more of a standard
> kernel function.

Fair enough :)
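
(For reference, since the page_alloc.c hunk got trimmed from the quotes
below: the new helper boils down to something like this, reusing the
existing hyp_set_page_refcounted() so that each constituent page ends up
with its own reference:)

void hyp_split_page(struct hyp_page *p)
{
	unsigned short order = p->order;
	unsigned int i;

	p->order = 0;
	for (i = 1; i < (1 << order); i++) {
		struct hyp_page *tail = p + i;

		tail->order = 0;
		hyp_set_page_refcounted(tail);
	}
}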

> > from host_s2_zalloc_page().
> > 
> > Fixes: 1025c8c0c6ac ("KVM: arm64: Wrap the host with a stage 2")
> > Suggested-by: Will Deacon <will@xxxxxxxxxx>
> > Signed-off-by: Quentin Perret <qperret@xxxxxxxxxx>
> > ---
> >  arch/arm64/kvm/hyp/include/nvhe/gfp.h |  1 +
> >  arch/arm64/kvm/hyp/nvhe/mem_protect.c |  6 +++++-
> >  arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 14 ++++++++++++++
> >  3 files changed, 20 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
> > index fb0f523d1492..0a048dc06a7d 100644
> > --- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
> > +++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
> > @@ -24,6 +24,7 @@ struct hyp_pool {
> >  
> >  /* Allocation */
> >  void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
> > +void hyp_split_page(struct hyp_page *page);
> >  void hyp_get_page(struct hyp_pool *pool, void *addr);
> >  void hyp_put_page(struct hyp_pool *pool, void *addr);
> >  
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index bacd493a4eac..93a79736c283 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -35,7 +35,11 @@ const u8 pkvm_hyp_id = 1;
> >  
> >  static void *host_s2_zalloc_pages_exact(size_t size)
> >  {
> > -	return hyp_alloc_pages(&host_s2_pool, get_order(size));
> > +	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
> > +
> > +	hyp_split_page(hyp_virt_to_page(addr));
> 
> The only reason this doesn't lead to a subsequent memory leak is that
> concatenated page tables are always a power of two, right?

Indeed, and also because the host stage-2 is _never_ freed, so that's
not memory we're going to reclaim anyway -- we don't have an
implementation of ->free_pages_exact() in the host stage-2 mm_ops.

> If so, that deserves a comment, because I don't think this works in
> the general case unless you actively free the pages that are between
> size and (1 << order).

Ack, that'll probably confuse me too in a few weeks, so a comment won't
hurt. I'll re-spin shortly.
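
Something along these lines for the respin, i.e. spelling out both the
power-of-two property of concatenated PGDs and the fact that the host
stage-2 is never torn down -- maybe with a WARN to back it up:

static void *host_s2_zalloc_pages_exact(size_t size)
{
	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));

	hyp_split_page(hyp_virt_to_page(addr));

	/*
	 * The size of concatenated PGDs is always a power of two of
	 * PAGE_SIZE, so there should be no need to free any of the tail
	 * pages to make the allocation exact. And since the host stage-2
	 * page-table is never freed, none of this memory will be handed
	 * back to the pool anyway.
	 */
	WARN_ON(size != (PAGE_SIZE << get_order(size)));

	return addr;
}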

Thanks,
Quentin