On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>
> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
> > I noticed this when adding support for guest memory dumping via qemu gdb
> > server. Now the array we get would look like below
> >
> > slbe0 slbv0
> > slbe1 slbv1
> > 0000  00000
> > 0000  00000
>
> Ok, so that's where the problem lies. Why are the entries 0 here? Either we
> try to fetch more entries than we should, we populate entries incorrectly,
> or the kernel simply returns invalid SLB entry values for invalid entries.
>
> Are you seeing this with PR KVM or HV KVM?

I suspect this is to do with the fact that PR and HV KVM use the
vcpu->arch.slb[] array differently.  PR stores SLB entry n in
vcpu->arch.slb[n], whereas HV packs the valid entries down in the
low-numbered slots and puts the index in the bottom bits of the esid
field (this is so they can be loaded efficiently with the slbmte
instruction on guest entry).

Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
(valid or not) and puts an index value in the bottom bits of the esid,
whereas on HV it just copies out the valid entries (which already have
the index in the esid field).

So, the question is, what is the ABI here?  It sounds a bit like qemu
is ignoring the index value in the esid field.  Either qemu needs to
take notice of the index in the esid field, or we need to change the HV
versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
sregs->u.s.ppc64.slb[n] like PR does.

Paul.
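
For illustration, here is a minimal sketch of the first option (userspace
taking notice of the index in the bottom bits of the esid when consuming
KVM_GET_SREGS), assuming a powerpc64 build where <linux/kvm.h> exposes the
Book3S sregs layout.  The names shadow_slb, slb_shadow_entry,
NUM_SLB_ENTRIES and SLB_INDEX_MASK are hypothetical; SLB_ESID_V follows
the usual Linux definition.  This is not QEMU's actual code, just a sketch
of the idea.

#include <stdint.h>
#include <string.h>
#include <linux/kvm.h>   /* assumes a powerpc64 host: pulls in the Book3S kvm_sregs layout */

#define NUM_SLB_ENTRIES  64                        /* size of sregs->u.s.ppc64.slb[] */
#define SLB_ESID_V       0x0000000008000000ULL     /* valid bit in the esid word */
#define SLB_INDEX_MASK   (NUM_SLB_ENTRIES - 1)     /* index sits in the low bits (slbmte RB format) */

struct slb_shadow_entry {                          /* hypothetical userspace shadow of the SLB */
    uint64_t esid;
    uint64_t vsid;
};

static void load_slb_from_sregs(const struct kvm_sregs *sregs,
                                struct slb_shadow_entry *shadow_slb)
{
    int i;

    /* Invalidate everything first; HV only hands back the valid entries. */
    memset(shadow_slb, 0, NUM_SLB_ENTRIES * sizeof(*shadow_slb));

    for (i = 0; i < NUM_SLB_ENTRIES; i++) {
        uint64_t esid = sregs->u.s.ppc64.slb[i].slbe;
        uint64_t vsid = sregs->u.s.ppc64.slb[i].slbv;
        unsigned int index = esid & SLB_INDEX_MASK;

        if (!(esid & SLB_ESID_V)) {
            continue;                              /* skip zeroed/invalid entries */
        }
        /* Place the entry at the slot named in the esid, not at i. */
        shadow_slb[index].esid = esid;
        shadow_slb[index].vsid = vsid;
    }
}

Because the loop clears the shadow array first and files each valid entry
by the index encoded in its esid, the same code would work for both the PR
layout (entry n at slb[n]) and the HV layout (valid entries packed at the
front of the array).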