Re: [PATCH 07/29] nVMX: Hold a vmcs02 for each vmcs12

On 01/31/2011 11:26 AM, Nadav Har'El wrote:
Hi,

On Sun, Jan 30, 2011, Avi Kivity wrote about "Re: [PATCH 07/29] nVMX: Hold a vmcs02 for each vmcs12":
>  >+/*
>  >+ * Allocate an L0 VMCS (vmcs02) for the current L1 VMCS (vmcs12), if one
>  >+ * does not already exist. The allocation is done in L0 memory, so to avoid
>  >+ * denial-of-service attack by guests, we limit the number of concurrently-
>  >+ * allocated vmcss. A well-behaving L1 will VMCLEAR unused vmcs12s and not
>  >+ * trigger this limit.
>
>  No, it won't.  If you run N guests on a single-cpu kvm host, you'll have
>  N active VMCSs.

Of course. What I said was that *unused* vmcs12s (ones that don't describe
any active guest) will normally be unloaded (VMCLEARed) by L1 and so will
not take up space; only VMCSs actually being used to run guests will. If
you have N running guests, then right, you'll have N VMCSs. I put the limit
at 256, which on one hand allows L1 to run 256 L2s (well above what people
will normally run on one CPU), and on the other hand limits the damage a
malicious L1 can do: at worst it can cause the host to allocate (and pin)
256 extra VMCSs, which at one 4 KB page each add up to 1 MB.
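
For concreteness, the cap amounts to something like this (an illustrative
sketch only, not the patch itself; NESTED_MAX_VMCS and the vmcs02_num
counter are made-up names here):

	/* A VMCS region is one 4 KB page, so 256 of them pin 1 MB of L0 memory. */
	#define NESTED_MAX_VMCS 256

	static struct vmcs *nested_get_vmcs02(struct vcpu_vmx *vmx)
	{
		struct vmcs *vmcs02;

		if (vmx->nested.vmcs02_num >= NESTED_MAX_VMCS)
			return NULL;	/* cap reached: the nested entry fails */

		vmcs02 = alloc_vmcs();	/* allocates and pins one page in L0 */
		if (vmcs02)
			vmx->nested.vmcs02_num++;
		return vmcs02;
	}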

I thought this compromise was good enough; you didn't, and although I still
don't understand why, I promise I will change it. I'll make this change my
top priority now.

VM crashes on legal guest behaviour are bad, and since they're easy to avoid, why allow them?
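
One way to keep a cap without ever failing a legal guest is to recycle:
when the pool is full, reuse the least-recently-used vmcs02 instead of
erroring out; L2 state can always be reloaded from the corresponding
vmcs12. A rough sketch (the pool list and its field names are illustrative,
not existing code):

	struct vmcs02_list {
		struct list_head list;
		gpa_t vmcs12_addr;	/* which vmcs12 this vmcs02 currently backs */
		struct vmcs *vmcs;
	};

	/* Pool full: reuse the least-recently-used entry from the list tail. */
	static struct vmcs *nested_recycle_vmcs02(struct vcpu_vmx *vmx)
	{
		struct vmcs02_list *item =
			list_entry(vmx->nested.vmcs02_pool.prev,
				   struct vmcs02_list, list);

		list_move(&item->list, &vmx->nested.vmcs02_pool);	/* now MRU */
		return item->vmcs;	/* caller reloads L2 state from vmcs12 */
	}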

>  >+static void __nested_free_saved_vmcs(void *arg)
>  >+{
>  >+	struct saved_vmcs *saved_vmcs = arg;
>  >+	int cpu = raw_smp_processor_id();
>  >+
>  >+	if (saved_vmcs->cpu == cpu) /* TODO: how can this not be the case? */
>  >+		vmcs_clear(saved_vmcs->vmcs);
>
>  This check will always be true.

This is what I thought too... I call this function on saved_vmcs->cpu
itself, so there's no reason it would find itself running on a different
cpu.

The only reason I added this sanity check was that __vcpu_clear has the same
one, and there too it seemed redundant, and I thought maybe I was missing
something. Do you know why __vcpu_clear needs this test?

__vcpu_clear() can race with itself: it is called both from the cpu migration path and from the cpu offline path.
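
Roughly: the migration path does smp_call_function_single(vcpu->cpu,
__vcpu_clear, ...) while the offline path calls the function directly on
the dying cpu, so two invocations can target the same vmcs. Whichever runs
second finds vcpu.cpu already reset and must skip the vmcs_clear().
Simplified from vmx.c (other bookkeeping elided, details approximate):

	static void __vcpu_clear(void *arg)
	{
		struct vcpu_vmx *vmx = arg;
		int cpu = raw_smp_processor_id();

		/* If the other path already ran, vcpu.cpu is -1 and we skip. */
		if (vmx->vcpu.cpu == cpu)
			vmcs_clear(vmx->vmcs);

		/* per-cpu bookkeeping elided */

		vmx->vcpu.cpu = -1;	/* this vmcs is now loaded nowhere */
	}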


--
error compiling committee.c: too many arguments to function


