Re: [PATCH 1/2] kvm/e500v2: Remove shadow tlb

On 10.09.2010, at 01:39, Hollis Blanchard wrote:

> On 09/09/2010 04:26 PM, Alexander Graf wrote:
>> On 09.09.2010, at 20:13, Hollis Blanchard wrote:
>>   
>>> On 09/09/2010 04:16 AM, Liu Yu-B13201 wrote:
>>>     
>>>> Yes, it's hard to resume TLB0. We only resumed TLB1 in the previous code.
>>>> But TLB1 is even smaller (13 free entries) than on the 440,
>>>> so it still has little chance of getting hits,
>>>> and thus the resumption is useless.
>>>> 
>>>>       
>>> The only reason hits are unlikely in TLB1 is that you still don't have large page support in the host. Once you have that, you can use TLB1 for large guest mappings, and it will become extremely likely that you get hits in TLB1. This is true even if the guest wants a 256MB mapping but the host supports only e.g. 16MB large pages, and must split the guest mapping into multiple large host pages.
>>> 
>>> When will you have hugetlbfs for e500? That's going to make such a dramatic difference, I'm not sure it's worth investing time in optimizing the MMU code until then.
>>>     
>> I'm not sure I agree. Sure, huge pages give another big win, but the current state should at least be fast enough for prototyping.
>>   
> Sure, and it sounds like you can prototype with it already. My point is that, in your 80-20 rule of optimization, the 20% is going to change radically once large page support is in place.
> 
> Remember that the guest kernel is mapped with just a couple large pages. During guest Linux boot on 440, I think about half the boot time is spent TLB thrashing in the initcalls. Using TLB0 can ameliorate that for now, but why bother, since it doesn't help you towards the real solution?
> 
> I'm not saying this shouldn't be committed, if that's how you interpreted my comments, but in my opinion there are more useful things to do than continuing to optimize a path that is going to disappear in the future. Once you *do* have hugetlbfs in the host, you won't want to use TLB0 for guest TLB1 mappings anymore anyway.

That depends on the use case. As long as transparent huge pages aren't available, not using hugetlbfs gives you several benefits:

  - ksm
  - swapping
  - lazy allocation

So while I agree that supporting huge pages is crucial for high-performance kvm, I'm not convinced it's the only path to optimize for. Look at x86 - few people actually use hugetlbfs there.
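
(Side note on the split Hollis describes above: it really is just arithmetic. Here's a rough standalone sketch using the numbers from his example - a 256MB guest mapping and 16MB host large pages; the base address and the program itself are made up purely for illustration, this is not the KVM code:)

  /*
   * Rough illustration only, not kernel code: a guest mapping that is
   * larger than the host's biggest page size is simply covered by
   * several host large pages, one shadow TLB1 entry each.
   */
  #include <stdio.h>

  #define GUEST_MAP_SIZE  (256UL << 20)  /* 256MB guest TLB1 mapping  */
  #define HOST_LARGE_PAGE (16UL << 20)   /* 16MB host large page size */

  int main(void)
  {
          unsigned long off, host_base = 0x40000000UL; /* made-up base */
          unsigned long pages = GUEST_MAP_SIZE / HOST_LARGE_PAGE;

          printf("one %luMB guest mapping -> %lu host large pages\n",
                 GUEST_MAP_SIZE >> 20, pages);

          for (off = 0; off < GUEST_MAP_SIZE; off += HOST_LARGE_PAGE)
                  printf("  guest offset 0x%08lx -> host page at 0x%08lx\n",
                         off, host_base + off);

          return 0;
  }

In practice the host of course still has to find and pin those contiguous 16MB chunks, which is exactly where hugetlbfs (or, later, transparent huge pages) comes in.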


Alex


