On Tue, Nov 10, 2020 at 1:19 PM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> > Why? I thought we were trying to kill nVHE off now that newer CPUs
> > provide the saner virtualization extensions?
>
> We can't kill nVHE at all, because that is the only game in town.
> You can't even buy a decent machine with VHE, no matter how much money
> you put on the table.

As I mentioned earlier, we have built this type of nVHE hypervisor, and
the proof of concept is here:

https://github.com/jkrh/kvms

See the README. It runs successfully on multiple pieces of arm64 hardware
and provides a tiny QEMU-based development environment via the makefiles
for the QEMU 'max' CPU. The code is rough and the number of man-hours put
into it is not sky high, but it does run. I'll push a new kernel patch to
the patches/ dir for one of the later kernels hopefully next week; up to
now we have only supported kernels between 4.9 and 5.6, as that is what
our development hardware runs.

It requires a handful of hooks in the KVM code, but the actual KVM calls
are just rerouted back to the kernel symbols. This way the hypervisor
itself can be kept very tiny. The stage-2 page tables are fully owned by
the hyp, and the guests are unmapped from the host memory when the option
is enabled (we call it host blinding; see the sketch at the end of this
mail). Multiple VMs can be run without pinning them into memory. It also
provides a tiny out-of-tree driver prototype stub to protect critical
sections of kernel memory beyond the kernel's own reach.

There are still holes in the implementation, such as the virtio mapback
handling via whitelisting and the paging integrity checks, and many
things are not quite all the way there yet. One step at a time.

--
Janne
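
To make the host-blinding point above a bit more concrete, here is a
minimal, self-contained C sketch of the concept: once pages are handed to
a guest, the hypervisor drops them from the host's stage-2 mapping, so a
later host access to that memory faults. Everything in it (the flat bitmap
standing in for a stage-2 table, the function names, the page counts) is a
hypothetical illustration only, not the kvms or KVM code.

/*
 * Toy model of "host blinding": pages donated to a guest are removed
 * from the host's stage-2 view.  The flat bitmap below stands in for
 * the host stage-2 page tables; all names are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define HOST_S2_PAGES   16              /* toy-sized host stage-2 table */

/* 1 bit per page: is the page still reachable through the host stage-2? */
static bool host_s2_mapped[HOST_S2_PAGES];

/* "Boot": every page starts out mapped for the host. */
static void host_s2_init(void)
{
        for (int i = 0; i < HOST_S2_PAGES; i++)
                host_s2_mapped[i] = true;
}

/*
 * Blind the host from guest memory: unmap [pa, pa + len) from the host
 * stage-2.  A real hypervisor would walk and invalidate its stage-2
 * page tables and flush the TLBs; here we just clear bits.
 */
static int host_blind_range(uint64_t pa, uint64_t len)
{
        uint64_t first = pa >> PAGE_SHIFT;
        uint64_t last = (pa + len - 1) >> PAGE_SHIFT;

        if (last >= HOST_S2_PAGES)
                return -1;

        for (uint64_t pfn = first; pfn <= last; pfn++)
                host_s2_mapped[pfn] = false;
        return 0;
}

/* What a host access to a given physical address would see. */
static const char *host_access(uint64_t pa)
{
        return host_s2_mapped[pa >> PAGE_SHIFT] ? "ok" : "stage-2 fault";
}

int main(void)
{
        host_s2_init();

        /* "Donate" pages 2 and 3 to a guest; the host loses access. */
        host_blind_range(2 * PAGE_SIZE, 2 * PAGE_SIZE);

        printf("host access to page 1: %s\n", host_access(1 * PAGE_SIZE));
        printf("host access to page 2: %s\n", host_access(2 * PAGE_SIZE));
        printf("host access to page 3: %s\n", host_access(3 * PAGE_SIZE));
        return 0;
}

In the real design the same operation happens on the stage-2 page tables
that only the hypervisor can modify (plus the required TLB maintenance),
which is what actually keeps the unmapped guest memory out of the host's
reach.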