Hi Marc,
On 25-11-2021 07:53 pm, Marc Zyngier wrote:
On Mon, 22 Nov 2021 09:58:03 +0000,
Ganapatrao Kulkarni <gankulkarni@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Commit 1776c91346b6 ("KVM: arm64: nv: Support multiple nested Stage-2 mmu
structures")[1] added a function kvm_vcpu_init_nested() which expands the
stage-2 mmu structures array whenever a new vCPU is created. The array
is expanded using krealloc(), which can move the allocation and leave a
stale mmu address pointer in pgt->mmu. Fix this by updating the pointer
with the new address after a successful krealloc().
[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/
branch kvm-arm64/nv-5.13
Signed-off-by: Ganapatrao Kulkarni <gankulkarni@xxxxxxxxxxxxxxxxxxxxxx>
---
arch/arm64/kvm/nested.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 4ffbc14d0245..57ad8d8f4ee5 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -68,6 +68,8 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
num_mmus * sizeof(*kvm->arch.nested_mmus),
GFP_KERNEL | __GFP_ZERO);
if (tmp) {
+ int i;
+
if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
@@ -80,6 +82,13 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
}
kvm->arch.nested_mmus = tmp;
+
+ /* Fixup pgt->mmu after krealloc */
+ for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+ struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+ mmu->pgt->mmu = mmu;
+ }
}
mutex_unlock(&kvm->lock);
Another good catch. I've tweaked it a bit to avoid some unnecessary
repainting; see below.
Thanks again,
M.
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index a4dfffa1dae0..92b225db59ac 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -66,8 +66,19 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
num_mmus = atomic_read(&kvm->online_vcpus) * 2;
tmp = krealloc(kvm->arch.nested_mmus,
num_mmus * sizeof(*kvm->arch.nested_mmus),
- GFP_KERNEL | __GFP_ZERO);
+ GFP_KERNEL_ACCOUNT | __GFP_ZERO);
if (tmp) {
+ /*
+ * If we went through a reallocation, adjust the MMU
+ * back-pointers in the pg_table structures.
+ */

Would it be more precise to say "back-pointers in the pg_table
structures of previous inits"?
+ if (kvm->arch.nested_mmus != tmp) {
+ int i;
+
+ for (i = 0; i < num_mmus - 2; i++)
+ tmp[i].pgt->mmu = &tmp[i];
+ }
Thanks for this optimization; it saves two redundant iterations.
+
if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
Feel free to add,
Reviewed-by: Ganapatrao Kulkarni <gankulkarni@xxxxxxxxxxxxxxxxxxxxxx>
Thanks,
Ganapat