Re: [PATCH 07/29] nVMX: Hold a vmcs02 for each vmcs12

Hi,

On Sun, Jan 30, 2011, Avi Kivity wrote about "Re: [PATCH 07/29] nVMX: Hold a vmcs02 for each vmcs12":
> >+/*
> >+ * Allocate an L0 VMCS (vmcs02) for the current L1 VMCS (vmcs12), if one
> >+ * does not already exist. The allocation is done in L0 memory, so to avoid
> >+ * denial-of-service attack by guests, we limit the number of concurrently-
> >+ * allocated vmcss. A well-behaving L1 will VMCLEAR unused vmcs12s and not
> >+ * trigger this limit.
> 
> No, it won't.  If you run N guests on a single-cpu kvm host, you'll have 
> N active VMCSs.

Of course. What I said was that *unused* vmcs12s (in the sense that they
don't describe any active guest) will normally be unloaded (VMCLEARed) by L1
and so will not take up space. Only VMCSs actually being used to run guests
will take up space. If you have N running guests, then right, you'll have
N VMCSs. I put the limit at 256, which on one hand allows L1 to run 256 L2s
(well above what people will normally run on one CPU), and on the other hand
limits the damage that a malicious L1 can do: at worst it can cause the host
to allocate (and pin) 256 extra VMCSs, which at 4 KB each sum up to 1 MB.
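
To make the compromise concrete, this is roughly the allocation path I have
in mind (untested sketch; struct vmcs02_list, vmcs02_pool, vmcs02_num,
current_vmptr and nested_get_current_vmcs02() are illustrative names, and it
assumes the usual vmx.c helpers like alloc_vmcs()):

	#define NESTED_MAX_VMCS02 256

	struct vmcs02_list {			/* illustrative */
		struct list_head list;
		gpa_t vmcs12_addr;		/* guest address of the vmcs12 */
		struct saved_vmcs vmcs02;	/* vmcs + cpu, as in the patch */
	};

	/*
	 * Sketch: return a vmcs02 for the current vmcs12, allocating a new
	 * one only while we are under the global limit.
	 */
	static struct saved_vmcs *nested_get_current_vmcs02(struct vcpu_vmx *vmx)
	{
		struct vmcs02_list *item;

		list_for_each_entry(item, &vmx->nested.vmcs02_pool, list)
			if (item->vmcs12_addr == vmx->nested.current_vmptr)
				return &item->vmcs02;	/* reuse existing one */

		if (vmx->nested.vmcs02_num >= NESTED_MAX_VMCS02)
			return NULL;		/* refuse: DoS limit reached */

		item = kmalloc(sizeof(*item), GFP_KERNEL);
		if (!item)
			return NULL;
		item->vmcs02.vmcs = alloc_vmcs();
		if (!item->vmcs02.vmcs) {
			kfree(item);
			return NULL;
		}
		item->vmcs02.cpu = -1;		/* not loaded on any cpu yet */
		item->vmcs12_addr = vmx->nested.current_vmptr;
		list_add(&item->list, &vmx->nested.vmcs02_pool);
		vmx->nested.vmcs02_num++;
		return &item->vmcs02;
	}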

I thought this compromise was good enough; you didn't, and although I still
don't understand why, I promise I will change it. I'll make this change my
top priority now.

> >+static void __nested_free_saved_vmcs(void *arg)
> >+{
> >+	struct saved_vmcs *saved_vmcs = arg;
> >+	int cpu = raw_smp_processor_id();
> >+
> >+	if (saved_vmcs->cpu == cpu) /* TODO: how can this not be the case? */
> >+		vmcs_clear(saved_vmcs->vmcs);
> 
> This check will always be true.

This is what I thought too... I call this function on the CPU recorded in
saved_vmcs->cpu, so there's no reason why it would find itself running on a
different one.
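
For reference, this is the invocation pattern (sketch; nested_free_saved_vmcs()
is an illustrative name, and smp_call_function_single() with wait=1 guarantees
__nested_free_saved_vmcs() runs on exactly the CPU we ask for, hence the
check above looks redundant to me):

	/*
	 * Sketch: VMCLEAR a saved vmcs02 on the cpu it was last loaded on,
	 * then free it. saved_vmcs->cpu == -1 means not loaded anywhere.
	 */
	static void nested_free_saved_vmcs(struct saved_vmcs *saved_vmcs)
	{
		if (saved_vmcs->cpu != -1)
			smp_call_function_single(saved_vmcs->cpu,
						 __nested_free_saved_vmcs,
						 saved_vmcs, 1);
		free_vmcs(saved_vmcs->vmcs);
	}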

The only reason I added this sanity check was that __vcpu_clear has the same
one, and there too it seemed redundant, and I thought maybe I was missing
something. Do you know why __vcpu_clear needs this test?

> 
> >+	if (per_cpu(current_vmcs, cpu) == saved_vmcs->vmcs)
> >+		per_cpu(current_vmcs, cpu) = NULL;
> 
> And this will always be false, no?  Unless you free a vmcs02 while you 
> use it?  Don't you always switch back to vmcs01 prior to freeing?
>..
> Maybe this is the counterexample - we kill a vcpu while it is in nested 
> mode.

Right, this is what I had in mind.
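In that path the vcpu is destroyed while vmcs02 is still loaded, so the
freeing really can run against the current VMCS. Roughly (sketch, reusing
the illustrative names from above; list_for_each_entry_safe because we
delete entries as we walk the list):

	/*
	 * Sketch: free all vmcs02s of a vcpu, e.g. from vmx_free_vcpu(). If
	 * the vcpu is killed while running L2 we never switched back to
	 * vmcs01, so one of these may still be a cpu's current_vmcs -- the
	 * case the per_cpu check above handles.
	 */
	static void nested_free_all_saved_vmcss(struct vcpu_vmx *vmx)
	{
		struct vmcs02_list *item, *n;

		list_for_each_entry_safe(item, n, &vmx->nested.vmcs02_pool, list) {
			nested_free_saved_vmcs(&item->vmcs02);
			list_del(&item->list);
			kfree(item);
		}
		vmx->nested.vmcs02_num = 0;
	}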

-- 
Nadav Har'El                        |      Monday, Jan 31 2011, 26 Shevat 5771
nyh@xxxxxxxxxxxxxxxxxxx             |-----------------------------------------
Phone +972-523-790466, ICQ 13349191 |I planted some bird seed. A bird came up.
http://nadav.harel.org.il           |Now I don't know what to feed it...