Hi,

I'm experimenting with kvm to see how it would work in co-existence with a tiny external hypervisor that also runs the host at el1/vmid 0. More about this later on in case it turns out to be anything generally useful, but I've been stuck for a few days now trying to understand the kvm stage-2 (ipa-to-phys) mapping when the guest is being created.

Things I think I've understood so far:

- qemu mmaps the guest memory per the machine type (virt in my case)
- qemu pushes the machine physical memory model into the kernel through kvm_vm_ioctl_set_memory_region() (a userspace sketch of this step is appended below)
- kvm has an mmu notifier block set up to listen for changes to these regions, and it becomes active once the machine memory model arrives. The mmu notifier calls handle_hva_to_gpa(), which dispatches the call to the appropriate map or unmap handler, and these make the s2 mapping changes for the vm as needed
- prior to starting the vm, kvm_arch_prepare_memory_region() is given a try to see if any IO areas could be s2 mapped before the guest is allowed to execute. This is mostly an optimization?
- vcpu is started
- as pages are touched once the vcpu starts executing, page faults are generated and the real s2 mappings slowly start to get created. LRU keeps the active pages pinned in memory; the others get evicted and their s2 mappings eventually disappear
- all in all, the vm runs and behaves pretty much like a normal userspace process

Is this roughly the story? If it is, I'm a bit lost as to where the stage-2 page fault handler that is supposed to create the s2 mappings lives.

It was surprisingly easy to get the external hypervisor (with very minimal changes to kvm) to the point where the guest is entered and vmid 1 starts to refer to the instructions at the vm ram base (0x40000000 for virt). Those, of course, currently scream bloody murder as the s2 mapping does not exist.

--
Janne
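
PS. For concreteness, here is a minimal userspace sketch of the memslot registration step mentioned above, using the standard KVM_CREATE_VM / KVM_SET_USER_MEMORY_REGION ioctls that land in kvm_vm_ioctl_set_memory_region() on the kernel side. The slot number, RAM size and error handling are made up for illustration; only the 0x40000000 base is meant to match the virt machine RAM base. This is just how I understand the flow, not necessarily exactly what qemu does.

/*
 * Sketch: mmap anonymous memory for guest RAM and register it as a
 * memslot.  No stage-2 mappings are created at this point; they should
 * appear lazily when a vcpu faults on the IPA range backed by the slot.
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	if (kvm < 0) { perror("open /dev/kvm"); return 1; }

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

	size_t ram_size = 128 << 20;		/* 128 MiB, arbitrary */
	void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED) { perror("mmap"); return 1; }

	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.flags           = 0,
		.guest_phys_addr = 0x40000000,	/* virt machine RAM base */
		.memory_size     = ram_size,
		.userspace_addr  = (unsigned long)ram,
	};

	if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
		perror("KVM_SET_USER_MEMORY_REGION");
		return 1;
	}

	printf("memslot 0: IPA 0x%llx, %zu bytes backed at %p\n",
	       (unsigned long long)region.guest_phys_addr, ram_size, ram);
	return 0;
}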