> -----Original Message-----
> From: Hollis Blanchard [mailto:hollisb@xxxxxxxxxx]
> Sent: Tuesday, September 09, 2008 1:31 AM
> To: Liu Yu-B13201
> Cc: kvm-ppc@xxxxxxxxxxxxxxx
> Subject: RE: [patch 0/4] add e500 platform support for KVM
>
> On Sat, 2008-08-30 at 11:15 +0800, Liu Yu wrote:
> > > > Unlike 44x, which uses TID=0 to map userspace and TID=1 to map
> > > > kernel space, I plan to use host TLB1 to map the kernel and host
> > > > TLB0 to map userspace. This split makes it convenient to handle
> > > > a privilege switch: before entering the guest we only need to
> > > > tlbivax TLB1. Userspace TLB entries with different TIDs can stay
> > > > in host TLB0 until the host KVM process switches or the guest
> > > > issues an explicit tlbivax.
> > > >
> > > > I just have this idea and have not thought all the details
> > > > through. What do you think of it?
> > >
> > > You might run into problems if you ever get large guest userspace
> > > mappings. I know hugetlbfs doesn't exist for e500 Linux right now,
> > > but it could in the future, and there are other kernels to
> > > consider.
> >
> > Can you give me more details?
> > I'm not sure how hugetlbfs could appear in the future, since TLB0
> > has a fixed mapping size of 4KB.
>
> You can't assume that TLB1 does not contain user mappings, because
> that's not true with hugetlbfs. Of course, hugetlbfs doesn't (yet?)
> exist for e500, so the assumption is valid until that happens.
>
> However, we *really* need large host page mappings to make KVM fast.
> Right now we have to split guest large pages (covering the kernel)
> into lots of 4K mappings, which means our TLB miss rate is *much*
> higher than if we could use hugetlbfs on the host. In that case, we
> could use hugetlbfs large user pages to back the guest kernel
> mappings.

Yes, it's a problem. But I'm afraid e500 would not use hugetlbfs, as
that would mean giving up the 512-entry TLB0 while TLB1 has only 16
entries.
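
For illustration, here is a minimal C model of the TLB split discussed
above. Everything in it is hypothetical (the structure names, fields,
and flush helper are invented for the sketch); it is not the actual KVM
e500 code, only the bookkeeping idea: TLB1 kernel mappings are dropped
on every guest entry, while TID-tagged TLB0 user mappings survive until
a host process switch or an explicit guest invalidate.

/*
 * Hypothetical sketch only -- not the real KVM e500 code. It models
 * the proposed host TLB usage: TLB1 holds guest-kernel mappings and
 * is flushed on every guest entry (the tlbivax step), while TLB0
 * holds 4 KB guest-user mappings tagged by TID that stay resident
 * across guest entries.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define E500_TLB0_ENTRIES 512  /* fixed 4 KB pages */
#define E500_TLB1_ENTRIES 16   /* variable-size pages */

struct tlb_entry {
	uint32_t tid;   /* translation ID tagging the owner */
	uint32_t epn;   /* effective page number */
	uint32_t rpn;   /* real page number */
	bool     valid;
};

struct host_tlb_state {
	struct tlb_entry tlb0[E500_TLB0_ENTRIES]; /* user mappings */
	struct tlb_entry tlb1[E500_TLB1_ENTRIES]; /* kernel mappings */
};

/* Stand-in for "tlbivax TLB1": drop every kernel mapping. */
static void flush_tlb1(struct host_tlb_state *tlb)
{
	for (int i = 0; i < E500_TLB1_ENTRIES; i++)
		tlb->tlb1[i].valid = false;
}

/*
 * Privilege switch into the guest: only TLB1 is invalidated.
 * TLB0 entries with other TIDs survive until the host KVM process
 * switches or the guest issues an explicit invalidate.
 */
static void enter_guest(struct host_tlb_state *tlb)
{
	flush_tlb1(tlb);
	/* ... restore guest registers and resume ... */
}

int main(void)
{
	struct host_tlb_state tlb = { 0 };

	/* Pretend a guest-user page was mapped under TID 3. */
	tlb.tlb0[0] = (struct tlb_entry){ .tid = 3, .epn = 0x1000,
					  .rpn = 0x8000, .valid = true };
	/* Pretend a guest-kernel page was mapped in TLB1. */
	tlb.tlb1[0] = (struct tlb_entry){ .tid = 1, .epn = 0xc000,
					  .rpn = 0x9000, .valid = true };

	enter_guest(&tlb);

	printf("TLB0[0] valid: %d (survives guest entry)\n",
	       tlb.tlb0[0].valid);
	printf("TLB1[0] valid: %d (flushed on guest entry)\n",
	       tlb.tlb1[0].valid);
	return 0;
}

Compiled standalone, the demo shows the TLB0 entry surviving the guest
entry while the TLB1 entry is gone, which is exactly the property that
makes the privilege switch cheap in the proposed scheme.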