On Tue, Sep 21, 2010, Chris Wright wrote about "Re: KVM call minutes for Sept 21":
> People keep looking for reasons to justify the cost of the effort, dunno
> why "because it's cool" isn't good enough ;) At any rate, that was mainly
> a question of how it might be useful for production kind of environments.

In my previous mail I gave a long list of examples of what you might do with
nested virtualization, and many of them could be called "production kind of
environments". Let me give you one small example that I recently encountered,
although by no means do I think it is the best example, nor the most
important one.

One of my colleagues wanted to run tests on a particular software product.
Following the recent virtualization trend, he didn't buy a physical test
machine, but rather rented a virtual machine on an internal compute-cloud
service similar in spirit to Amazon's EC2, and ran his tests on that virtual
machine.

The problem he then faced was that he actually wanted to run his tests on
several different operating systems - e.g., several versions of Linux and
Windows. No problem - he would just start multiple virtual machines, either
concurrently or in series, each from a different image and running a
different OS. But there was a big cost problem: like Amazon's service, this
service charged by full hours (if you use 10 minutes, you are charged for a
full hour), and worse, it had a per-virtual-machine start/destroy cost. So
if his test needed to run for 10 minutes on Windows XP, then 10 minutes on
Windows 7, then 10 minutes on Linux, he would pay three times more than he
would for one virtual machine for a full hour. Moreover, he would need
software to automate this whole succession of virtual-machine starts and
stops.

What he could have used is nested virtualization: he could get one virtual
machine for 30 minutes, run a nested hypervisor on it, and inside that run
his own three virtual machines - two Windows and one Linux.
Moreover, he would have one image that contains this entire internal setup,
making it easy to start and stop the whole test setup anytime, anywhere. In
essence, nested virtualization will allow him to easily and cheaply
sub-divide and organize the one virtual machine he is renting - exactly like
virtualization allowed doing the same with one physical machine.

Again, this is just an example of a need that I encountered last week from
an actual user of a real cloud service. By no means do I think this is the
only example, the best example, or the example that gives the most business
value.

> If there are remaining issues that could be done by someone else, this
> might be helpful. Otherwise, probably only useful to you ;)

In theory (if we have a public git repository to track this), there is no
reason not to divide the remaining issues between people. For example, one
person could fix the IDT code that bothered Gleb, while another person
reorders the vmcs12 structure as requested in another review, and a third
person writes tests. All we'd need is a repository to work on the code
together. KVM's main repository would of course be best, which is why I'm
hoping to get these patches checked in, rather than continuing to work
separately as we have been doing.

> - has long term maintenance issues
>
> And that means that there's two halves to the feature. One is the nested
> VMX code itself, for example each of the new EXIT_REASON_VM* handlers.
> Other is glue to rest of KVM, for example, interrupt injection done
> optimally. Both have long term maintenance issues, but adding complexity
> to core KVM was the context here.

I believe that in the current state of the code, nested VMX adds little
complexity to the non-nested code - just a few if's. Of course, it also adds
a lot of new code, but none of that code gets run in the non-nested case.
The maintenance issues I see are the other way around - i.e., once in a
while, when non-nested changes are made to KVM, nested stops working and
needs to be fixed. A prime example of this was the lazy FPU loading added at
the beginning of the year, which broke our assumption that L0's
CR0_GUEST_HOST_MASK always has all its bits on, making nested stop working
until I fixed it (it wasn't easy debugging these problems ;-)).

I wholeheartedly agree that if nobody continues to maintain nested VMX, it
can and will become "stale", and may stop working after unrelated code in
KVM is modified. Adding tests can help here (so that when someone modifies
some non-nested KVM feature he will at least know that he broke nested),
but definitely, we'll need to continue having someone who is interested in
keeping nested VMX working. In the foreseeable future, I'll volunteer to be
that someone.

Nadav.

-- 
Nadav Har'El                        | Wednesday, Sep 22 2010, 15 Tishri 5771
nyh@xxxxxxxxxxxxxxxxxxx             |-----------------------------------------
Phone +972-523-790466, ICQ 13349191 | All those who believe in psychokinesis,
http://nadav.harel.org.il           | raise my hand.