On 06/10/2011 07:05 AM, Xiao Guangrong wrote:
> On 06/09/2011 03:39 PM, Avi Kivity wrote:
>> First, I think we should consider dropping bypass_guest_pf completely, just so we have less things to think about.
>
> I agree.
Great, please post a patch.
>> I'm also not sure RCU is enough protection - we can unlink a page in the middle of a hierarchy,
>
> I think it is ok, it is just like the page structure cache of a real CPU: we can use either the old mapping or the new mapping here, and if we miss, the page fault path is called and fixes the problem for us.
>
>> and on i386 this causes an invalid pointer to appear when we fetch the two halves.  But I guess, if the cpu can do it, so can we.
>
> Ah, maybe the cpu can not do it, we need a lightweight way to get the spte on an i386 host...
Look at the comments in arch/x86/mm/gup.c - it does the same thing.
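
For reference, the PAE case there reads the two halves and retries if the low word changed in between.  A simplified sketch of that pattern applied to sptes (not the exact gup.c code, and the helper name is made up):

    /* lockless read of a 64-bit spte on a 32-bit PAE host */
    static u64 lockless_read_spte(u64 *sptep)
    {
            u32 *p = (u32 *)sptep;
            union {
                    u64 spte;
                    u32 half[2];
            } v;

            do {
                    v.half[0] = p[0];       /* low word first */
                    smp_rmb();
                    v.half[1] = p[1];       /* then the high word */
                    smp_rmb();
            } while (unlikely(v.half[0] != p[0]));  /* retry if the low word changed */

            return v.spte;
    }

The retry catches a concurrent update, so we never return a value stitched together from two different sptes - provided updates always touch the low word, as the pte update protocol does.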
>> Maybe we can do something like
>>
>> again:
>>     fetch pointer to last level spte using RCU
>>     if failed:
>>         take lock
>>         build spte hierarchy
>>         drop lock
>>         goto again
>>     if sync:
>>         if mmio:
>>             do mmio
>>             return
>>         return
>>     walk guest table
>>     install spte
>>     if mmio:
>>         do mmio
>>
>> (sync is always false for tdp)
>
> It seems it is more complex,
It also doesn't work - we have to set up rmap under lock.
> the original way is:
>
>     fetch last level spte
>     if failed or it is not a mmio spte:
>         call page fault
>     do mmio
>
> and the page fault path is a little heavy since we need to walk the guest page table
> and build the sptes under mmu-lock.
For shadow, yes, this is a good optimization.  But with nested paging it slows things down.  We already have the gpa, so all we need to do is follow the mmio path.  There's no need to walk the spte hierarchy.
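
With tdp the gpa comes straight from the exit, so the fast path could look roughly like this (a sketch only; the emulation and fallback helpers are made-up names, not real kvm functions):

    /*
     * A gpa that is not backed by any memslot is mmio, so emulate it
     * directly; there is no need to walk or build the spte hierarchy.
     */
    static int tdp_fault_fast_mmio(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code)
    {
            if (!gfn_to_memslot(vcpu->kvm, gpa >> PAGE_SHIFT))
                    return emulate_mmio_access(vcpu, gpa);          /* hypothetical */

            /* a real RAM fault still goes through the normal locked path */
            return handle_ram_fault_locked(vcpu, gpa, error_code);  /* hypothetical */
    }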
> Maybe I missed your meaning - could you please tell me the advantage? :-(
I wanted to also service RAM faults without the lock, if the only thing missing was the spte (and the rest of the hierarchy was fine). But it can't be made to work without an overhaul of all of the locking.
--
error compiling committee.c: too many arguments to function