On 06/28/2012 06:45 AM, Takuya Yoshikawa wrote:
> On Thu, 28 Jun 2012 11:12:51 +0800
> Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx> wrote:
>
>> >  struct kvm_arch_memory_slot {
>> > +	unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
>> >  	struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
>> >  };
>> >
>>
>> It looks a little more complex than before - we need to manage more
>> allocated/freed buffers.
>
> Actually I want to integrate rmap and rmap_pde in the future:
>
>   rmap[KVM_NR_PAGE_SIZES]

That's a good direction.

>
> For this we need to modify some unrelated ppc code, so I just
> avoided the integration in this series.
>
> Note: write_count: 4 bytes, rmap_pde: 8 bytes.  So we are wasting
> extra padding by packing them into lpage_info.

The wastage is quite low since it's just 4 bytes per 2MB.

>
>> Why not just introduce a function to get the next rmap? Something like this:
>
> I want to eliminate this kind of overhead.

I don't think the overhead is significant.  rmap walk speed is largely a
function of cache misses IMO, and we may even be adding cache misses by
splitting lpage_info.  But I still think it's the right thing since it
simplifies the code.  Maybe we should add a prefetch() on write_count to
mitigate the overhead, if it starts showing up in profiles.

-- 
error compiling committee.c: too many arguments to function
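
[Editor's note: a minimal standalone sketch, not the actual KVM code, illustrating
the padding argument above.  Field and struct names follow the mail; sizes assume
a 64-bit (LP64) target, where packing a 4-byte write_count next to an 8-byte
rmap_pde pointer forces 4 bytes of tail padding per large page, while keeping
rmap_pde in its own per-slot array does not.]

#include <stdio.h>

/* write_count and rmap_pde packed into one per-large-page entry */
struct lpage_info_packed {
	unsigned long *rmap_pde;	/* 8 bytes */
	int write_count;		/* 4 bytes + 4 bytes tail padding */
};

/* write_count alone; rmap_pde lives in a separate array of pointers */
struct lpage_info_split {
	int write_count;		/* 4 bytes, packs tightly in an array */
};

int main(void)
{
	/* packed: 16 bytes per large page (4 of them padding) */
	printf("packed: %zu bytes per large page\n",
	       sizeof(struct lpage_info_packed));

	/* split: 4 + 8 = 12 bytes per large page, no padding */
	printf("split:  %zu + %zu bytes per large page\n",
	       sizeof(struct lpage_info_split), sizeof(unsigned long *));
	return 0;
}

[As the mail notes, the saved padding amounts to only 4 bytes per 2MB region,
which is why the cache-miss behaviour of the rmap walk, rather than the memory
footprint, is the more interesting consideration here.]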