Re: [PATCH 3/6] Nested VMX patch 3 implements vmptrld and vmptrst

On 09/03/2009 05:25 PM, Orit Wasserman wrote:

+   /*
+    * Level 2 state : includes vmcs,registers and
+    * a copy of vmcs12 for vmread/vmwrite
+    */
+   struct level_state *l2_state;
+
+   /* Level 1 state for switching to level 2 and back */
+   struct level_state *l1_state;

Can you explain why we need two of them?  In the guest vmcs we have host
and guest values, and in l1_state and l2_state we have more copies, and
in struct vcpu we have yet another set of copies.  We also have a couple
of copies in the host vmcs.  I'm getting dizzy...
l2_state stores all the L2 guest state:
       vmcs - a pointer to VMCS02, the VMCS L0 uses to run it.
       shadow vmcs - a structure storing the values of VMCS12 (the vmcs L1
creates to run L2).

When we support multiple nested guests, we'll run into a problem of where to store shadow_vmcs. I see these options:

- maintain a cache of limited size of shadow_vmcs; when evicting, copy the
  shadow_vmcs into the guest's vmptr
- always put shadow_vmcs in the guest's vmptr, and write protect it so the
  guest can't play with it
- always put shadow_vmcs in the guest's vmptr, and verify everything you read
  (that's what nsvm does)
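
To make the first option concrete, a very rough sketch (nested.cache,
NESTED_VMCS_CACHE_SIZE and shadow_vmcs_cache_entry are names I made up for
illustration; error handling and locking omitted): a small per-vcpu array of
shadow_vmcs copies keyed by the guest's vmptr, written back to the guest page
on eviction.

  #define NESTED_VMCS_CACHE_SIZE 4          /* made-up size */

  struct shadow_vmcs_cache_entry {
          gpa_t vmptr;                      /* guest-physical address of VMCS12 */
          struct shadow_vmcs svmcs;         /* host-side copy */
          bool valid;
  };

  static struct shadow_vmcs *nested_cache_get(struct vcpu_vmx *vmx, gpa_t vmptr)
  {
          struct shadow_vmcs_cache_entry *e, *victim = &vmx->nested.cache[0];
          int i;

          for (i = 0; i < NESTED_VMCS_CACHE_SIZE; i++) {
                  e = &vmx->nested.cache[i];
                  if (e->valid && e->vmptr == vmptr)
                          return &e->svmcs;       /* hit */
                  if (!e->valid)
                          victim = e;             /* prefer an empty slot */
          }

          /* miss: write the evicted copy back into the guest's vmptr page,
             then read the requested vmcs in (return values ignored here) */
          if (victim->valid)
                  kvm_write_guest(vmx->vcpu.kvm, victim->vmptr,
                                  &victim->svmcs, sizeof(victim->svmcs));
          kvm_read_guest(vmx->vcpu.kvm, vmptr,
                         &victim->svmcs, sizeof(victim->svmcs));
          victim->vmptr = vmptr;
          victim->valid = true;
          return &victim->svmcs;
  }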

       cpu - the cpu id

Why is it needed?

       launched - launched flag

Can be part of shadow_vmcs

       vpid - the vpid allocated by L0 for L2 (we need to store it somewhere)

Note the guest can DoS the host by allocating a lot of vpids. So we need to allocate host vpids on demand and be able to flush them out.
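
Something along these lines (nested_get_vpid is a made-up name, the field path
is my guess from the patch, and locking around the vpid bitmap is omitted):
allocate the host vpid lazily, and fall back to vpid 0 (shared with L1,
flushed on every switch) when the pool is exhausted, so the guest cannot pin
all host vpids.

  static int nested_get_vpid(struct vcpu_vmx *vmx)
  {
          int vpid;

          if (vmx->nested.l2_state->vpid)
                  return vmx->nested.l2_state->vpid;  /* already allocated */

          vpid = find_first_zero_bit(vmx_vpid_bitmap, VMX_NR_VPIDS);
          if (vpid < VMX_NR_VPIDS) {
                  __set_bit(vpid, vmx_vpid_bitmap);
                  vmx->nested.l2_state->vpid = vpid;
          }
          /* else leave vpid 0: shared with L1, TLB flushed on every switch */

          return vmx->nested.l2_state->vpid;
  }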

       msr_bitmap - At the moment we use the L0 msr_bitmap (as we are running
kvm on kvm); in the future we will use a merge of both bitmaps.

Note kvm uses two bitmaps (for long mode and legacy mode).
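
The merge itself should be cheap: a set bit in a VMX msr bitmap means "exit on
this MSR", so the merged bitmap is just the bitwise OR of the L0 and L1
bitmaps, done once for each of the two kvm bitmaps. Rough sketch
(nested_merge_msr_bitmap is a made-up name):

  static void nested_merge_msr_bitmap(unsigned long *merged,
                                      const unsigned long *l0_bitmap,
                                      const unsigned long *l1_bitmap)
  {
          int i;

          /* exit if either L0 or L1 wants to intercept the MSR */
          for (i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++)
                  merged[i] = l0_bitmap[i] | l1_bitmap[i];
  }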

l1_state stores the L1 state:
       vmcs - pointer to VMCS01

So it's the same as vmx->vmcs in normal operation?

       shadow vmcs - a structure storing the values of VMCS01. We use it
when updating VMCS02 in order to avoid the need to switch between VMCS02
and VMCS01.

Sorry, don't understand.

       cpu - the cpu id
       launched - launched flag

This is a copy of vmx->launched?

+
+   if (vmx->nested.l1_cur_vmcs != guest_vmcs_addr) {
+      vmcs_page = nested_get_page(vcpu, guest_vmcs_addr);
+      if (vmcs_page == NULL)
+         return 1;
+
+      /* load nested vmcs to processor */
+      if (vmptrld(vcpu, page_to_phys(vmcs_page))) {

So, you're loading a guest page as the vmcs.  This is dangerous as the
guest can play with it.  Much better to use inaccessible memory (and you
do alloc_vmcs() earlier?)
We can copy the vmcs and then vmptrld it. As for the allocated vmcs, this is
a memory leak and I will fix it (it should be allocated only once).

But why do it? Your approach is to store the guest vmcs in the same format as the processor (which we don't really know), so you have to use vmread/vmwrite to maintain it. Instead, you can choose that the guest vmcs is a shadow_vmcs structure and then you can access it using normal memory operations.

I see.  You're using the processor's format when reading the guest
vmcs.  But we don't have to do that, we can use the shadow_vmcs
structure (and a memcpy).
I'm sorry, I don't understand your comment; can you elaborate?

See previous comment.  Basically you can do

  struct shadow_vmcs *svmcs = kmap_atomic(gpa_to_page(vmx->vmptr));
  printk("guest_cs = %x\n", svmcs->guest_cs_selector);

instead of

  vmptrld(gpa_to_hpa(vmx->vmptr));
  printk("guest_cs = %x\n", vmcs_read16(GUEST_CS_SELECTOR));


--
error compiling committee.c: too many arguments to function
