On 09/25/2009 03:47 AM, Zachary Amsden wrote:
In the process of bringing down CPUs, the SVM / VMX structures associated
with those CPUs are not freed. This may cause leaks when unloading and
reloading the KVM module, as only the structures associated with online
CPUs are cleaned up. So, clean up all possible CPUs, not just online ones.

Signed-off-by: Zachary Amsden <zamsden@xxxxxxxxxx>
---
 arch/x86/kvm/svm.c |    2 +-
 arch/x86/kvm/vmx.c |    7 +++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8f99d0c..13ca268 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -525,7 +525,7 @@ static __exit void svm_hardware_unsetup(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
+	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
 
 	__free_pages(pfn_to_page(iopm_base >> PAGE_SHIFT), IOPM_ALLOC_ORDER);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b8a8428..603bde3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1350,8 +1350,11 @@ static void free_kvm_area(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
-		free_vmcs(per_cpu(vmxarea, cpu));
+	for_each_possible_cpu(cpu)
+		if (per_cpu(vmxarea, cpu)) {
+			free_vmcs(per_cpu(vmxarea, cpu));
+			per_cpu(vmxarea, cpu) = NULL;
+		}
 }
 
 static __init int alloc_kvm_area(void)
First, I'm not sure per_cpu() works for possible-but-not-online cpus. Second, with this patch we allocate eagerly but free lazily, which leads to lots of ifs and buts. I think the code would be cleaner if we both allocated and freed eagerly, covering all possible cpus in both directions.
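
For illustration, a minimal sketch of what the eager/eager shape might look like on the VMX side (hypothetical rework, not the patch above; it assumes vmxarea is a static per-cpu pointer that starts out NULL, and that free_vmcs() -> free_pages() tolerates a zero address on the error path):

static __init int alloc_kvm_area(void)
{
	int cpu;

	/*
	 * Allocate a VMCS for every possible cpu up front, so that
	 * free_kvm_area() never has to distinguish online cpus from
	 * offline ones.
	 */
	for_each_possible_cpu(cpu) {
		struct vmcs *vmcs;

		vmcs = alloc_vmcs_cpu(cpu);
		if (!vmcs) {
			free_kvm_area();
			return -ENOMEM;
		}
		per_cpu(vmxarea, cpu) = vmcs;
	}
	return 0;
}

static void free_kvm_area(void)
{
	int cpu;

	/*
	 * Symmetric with allocation: walk every possible cpu with no
	 * conditionals. Entries never allocated are still NULL, which
	 * free_vmcs() is assumed to treat as a no-op.
	 */
	for_each_possible_cpu(cpu) {
		free_vmcs(per_cpu(vmxarea, cpu));
		per_cpu(vmxarea, cpu) = NULL;
	}
}

That keeps setup and teardown symmetric, at the cost of allocating VMCSs for cpus that may never come online.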
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.