On 08/06/2011 01:39 PM, Christoffer Dall wrote:
> Provides a complete world-switch implementation to switch to other guests
> running in non-secure modes. Includes Hyp exception handlers that
> capture the necessary exception information and store it on the VCPU
> and KVM structures.
>
> Switching to Hyp mode is done through a simple HVC instruction. The
> exception vector code will check that the HVC comes from VMID==0 and if
> so will store the necessary state on the Hyp stack, which will look like
> this (see hyp_hvc):
>   ...
>   Hyp_Sp + 4: lr_usr
>   Hyp_Sp    : spsr (Host-SVC cpsr)
>
> When returning from Hyp mode to SVC mode, another HVC instruction is
> executed from Hyp mode, which is taken in the Hyp_Svc handler. The Hyp
> stack pointer should be where it was left by the initial call above,
> since the values on the stack will be used to restore state (see
> hyp_svc).
>
> Otherwise, the world-switch is pretty straightforward. All state that
> can be modified by the guest is first backed up on the Hyp stack and the
> VCPU values are loaded onto the hardware. State which is not loaded, but
> is theoretically modifiable by the guest, is protected through the
> virtualization features so that it traps and causes software emulation.
> Upon guest return, all state is restored from the hardware onto the VCPU
> struct and the original state is restored from the Hyp stack onto the
> hardware.
>
> One controversy may be the back-door call to __irq_svc (the host
> kernel's own physical IRQ handler), which is called when a physical IRQ
> exception is taken in Hyp mode while running in the guest.
>
>
>  void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
>  {
> +	unsigned long start, end;
> +
>  	latest_vcpu = NULL;
> -	KVMARM_NOT_IMPLEMENTED();
> +
> +	start = (unsigned long)vcpu;
> +	end = start + sizeof(struct kvm_vcpu);
> +	remove_hyp_mappings(kvm_hyp_pgd, start, end);

What if vcpu shares a page with another mapped structure?

> +
> +	kmem_cache_free(kvm_vcpu_cache, vcpu);
>  }

>  	return 0;
>  }
>
> +/**
> + * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
> + * @vcpu:  The VCPU pointer
> + * @run:   The kvm_run structure pointer used for userspace state exchange
> + *
> + * This function is called through the VCPU_RUN ioctl from user space. It
> + * will execute VM code in a loop until the time slice for the process is
> + * used up or some emulation is needed from user space, in which case the
> + * function will return with value 0 and with the kvm_run structure filled
> + * in with the required data for the requested emulation.
> + */
>  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	KVMARM_NOT_IMPLEMENTED();
> -	return -EINVAL;
> +	unsigned long flags;
> +	int ret;
> +
> +	for (;;) {
> +		trace_kvm_entry(vcpu->arch.regs.pc);
> +		debug_ws_enter(vcpu->arch.regs.pc);

why both trace_kvm and debug_ws?

> +		kvm_guest_enter();
> +
> +		local_irq_save(flags);

local_irq_disable() is likely sufficient - the call path never changes.

> +		ret = __kvm_vcpu_run(vcpu);
> +		local_irq_restore(flags);
> +
> +		kvm_guest_exit();
> +		debug_ws_exit(vcpu->arch.regs.pc);
> +		trace_kvm_exit(vcpu->arch.regs.pc);
> +	}
> +
> +	return ret;
>  }

-- 
error compiling committee.c: too many arguments to function
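
A C-struct view of the Hyp stack frame described in the commit text may help when
reading hyp_hvc/hyp_svc. This is purely illustrative; only the two slots named in
the mail are taken from it, the real layout is whatever the assembly pushes, and
the rest stays elided:

	#include <stdint.h>

	/*
	 * Hypothetical overlay of the frame hyp_hvc leaves on the Hyp stack.
	 * Hyp_Sp points at the lowest field when hyp_svc later unwinds it.
	 */
	struct hyp_hvc_frame {
		uint32_t host_spsr;   /* Hyp_Sp     : spsr (Host-SVC cpsr) */
		uint32_t lr_usr;      /* Hyp_Sp + 4 : lr_usr               */
		/* ... further saved host state above, elided in the mail ... */
	};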
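
On the "shares a page" question, a sketch of the worry. struct kvm_vcpu,
remove_hyp_mappings() and kvm_hyp_pgd come from the quoted hunk; the wrapper
name and the page-granularity assumption are mine:

	#include <linux/kvm_host.h>	/* struct kvm_vcpu */

	/* Illustration only, not a proposed change. */
	static void example_unmap_vcpu_from_hyp(struct kvm_vcpu *vcpu)
	{
		unsigned long start = (unsigned long)vcpu;
		unsigned long end   = start + sizeof(*vcpu);

		/*
		 * Assumption: the unmap ultimately works on whole pages,
		 * i.e. on [start & PAGE_MASK, PAGE_ALIGN(end)) rather than
		 * on [start, end).  kmem_cache objects are not page aligned,
		 * so other live objects sitting in the leading or trailing
		 * slack of those pages would lose their Hyp mapping along
		 * with the vcpu.
		 */
		remove_hyp_mappings(kvm_hyp_pgd, start, end);
	}

A page-aligned per-vcpu allocation, or reference counting of the Hyp mappings,
would be two ways out; either way the free path cannot treat those pages as
private to this vcpu.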
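
And the local_irq_disable() remark as code, assuming the ioctl path always
reaches this point with interrupts enabled (the premise of the comment; if that
could ever change, the save/restore pair is the safer form):

		/* interrupts are known to be enabled on entry ... */
		local_irq_disable();
		ret = __kvm_vcpu_run(vcpu);
		/* ... so a plain enable, not a restore, is enough,
		 * and 'flags' can go away. */
		local_irq_enable();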