On Thu, 2007-07-12 at 20:41 -0400, Steven Rostedt wrote:
> >
> > (Note that lguest doesn't support NMIs, but Steven has code for NMI
> > support for lguest-x86-64 which could be ported across).
>
> Rusty,
>
> About that.  Is there a way to get an NMI-only stack in i386?  In
> x86_64 it's part of the TSS, so I can always know I have a good stack
> for the NMI.  I'm not sure i386 has the same thing.  Or do we always
> have a good stack whenever we are in ring 0?

Yeah, we always have a good stack.  It might have some stuff pushed on
it if we were in the middle of a switch, but that's OK.

> Oh, and btw, I've just rewritten all of the Lguest64 page table
> handling.  I'm just going over one design change that is really
> bothering me.  In x86_64 we can have 2M or 4K pages (like PSE in
> i386).  But since 4K pages are used by the shadow page tables, I have
> to map them like that.  But this means that the same guest address can
> show up as both a PMD and a PTE, which is breaking some of my code.
> I'm working on a fix as I write this.

This would be a good place to share with KVM, I think.  (Or at least
look at what they did here.)

> But to get you up to date on where I'm at: I've implemented a way to
> have the HV mapped uniquely for all VCPUs.  So there's an HV text
> section (the same for all VCPUs), an HV VCPU Data section (read-only
> in all rings with the guest cr3), and an HV VCPU Scratch Pad section
> (read/write in rings 0, 1, and 2).  So now the guest kernel runs in
> ring 1.  With this change, I've already implemented a syscall
> trampoline that no longer needs to switch to the host, and an iretq
> by the guest kernel goes directly to guest user space (or kernel).
> The next version of lguest64 will be much cleaner and faster!

Awesome!  Good to see the puppies run free...

Cheers,
Rusty.
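
[For readers unfamiliar with the x86_64 mechanism Steven refers to:
each of the seven Interrupt Stack Table (IST) slots in the 64-bit TSS
can name a stack that the CPU switches to unconditionally when the
corresponding IDT gate fires, which is what guarantees a known-good
NMI stack.  A minimal sketch of the idea; the names (nmi_stack,
NMI_IST_INDEX, setup_nmi_stack) are illustrative, not actual lguest
code:

#include <stdint.h>

#define NMI_IST_INDEX  1        /* IST slots are numbered 1..7 */
#define NMI_STACK_SIZE 4096

static char nmi_stack[NMI_STACK_SIZE] __attribute__((aligned(16)));

/* Layout of the hardware-defined 64-bit TSS. */
struct tss64 {
	uint32_t reserved0;
	uint64_t rsp[3];        /* stacks loaded on ring 0-2 transitions */
	uint64_t reserved1;
	uint64_t ist[7];        /* the Interrupt Stack Table */
	uint64_t reserved2;
	uint16_t reserved3;
	uint16_t iomap_base;
} __attribute__((packed));

static void setup_nmi_stack(struct tss64 *tss)
{
	/* An IST entry holds the *top* of its stack (stacks grow down).
	 * Once the NMI gate in the IDT carries IST index 1, the CPU
	 * switches to this stack on every NMI delivery, regardless of
	 * what the current stack looks like. */
	tss->ist[NMI_IST_INDEX - 1] = (uint64_t)nmi_stack + NMI_STACK_SIZE;
}

i386 has no IST, which is why the question only arises there.]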
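
[To make the PMD-versus-PTE ambiguity concrete: when the guest maps a
2M page (PSE set in its PMD) but the shadow tables are built purely
from 4K pages, that one guest entry has to be fanned out into 512
shadow PTEs, so the same guest address is a leaf at the PMD level on
the guest side and at the PTE level on the shadow side.  A hedged
sketch of the split, with all names hypothetical:

#include <stdint.h>

#define SHADOW_PAGE_SIZE  4096ULL
#define PTRS_PER_PTE      512
#define LARGE_PAGE_SIZE   (SHADOW_PAGE_SIZE * PTRS_PER_PTE)  /* 2M */
#define PMD_PSE           (1ULL << 7)  /* "this PMD maps a 2M page" */

/* Split one guest 2M PMD entry into 512 shadow 4K PTEs.  gpmd is the
 * guest's large-page PMD entry; shadow_pte points at the shadow PTE
 * page covering the same 2M of guest address space. */
static void shadow_split_2m(uint64_t gpmd, uint64_t *shadow_pte)
{
	uint64_t base  = gpmd & ~(LARGE_PAGE_SIZE - 1);
	uint64_t flags = (gpmd & (SHADOW_PAGE_SIZE - 1)) & ~PMD_PSE;
	int i;

	/* Each shadow PTE maps one 4K slice of the guest's 2M page,
	 * carrying over the guest's protection bits minus PSE. */
	for (i = 0; i < PTRS_PER_PTE; i++)
		shadow_pte[i] = (base + i * SHADOW_PAGE_SIZE) | flags;
}]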
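
[The three HV sections map naturally onto x86 page protections, since
paging treats rings 0-2 uniformly as "supervisor" and only ring 3 as
"user".  A sketch of the protection bits each section might carry;
the constants are illustrative, not the actual lguest64 layout:

#include <stdint.h>

#define PTE_PRESENT (1ULL << 0)
#define PTE_RW      (1ULL << 1)
#define PTE_USER    (1ULL << 2)

/*  HV text:        shared read-only code, supervisor-only
 *  HV VCPU Data:   read-only in all rings, so user-readable too
 *  HV Scratch Pad: read/write for rings 0-2, invisible to ring 3
 *
 * Note that with CR0.WP clear, supervisor code (including a guest
 * kernel in ring 1) can write through read-only mappings, so WP has
 * to stay set for the read-only sections to hold. */
static const uint64_t hv_text_prot    = PTE_PRESENT;            /* ro, rings 0-2 */
static const uint64_t hv_data_prot    = PTE_PRESENT | PTE_USER; /* ro, all rings */
static const uint64_t hv_scratch_prot = PTE_PRESENT | PTE_RW;   /* rw, rings 0-2 */]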