Re: [PATCH] target-ppc: Update slb array with correct index values.

On 21.08.2013 at 10:25, Paul Mackerras <paulus@xxxxxxxxx> wrote:

> On Wed, Aug 21, 2013 at 08:37:47AM +0100, Alexander Graf wrote:
>> 
>> On 21.08.2013, at 06:11, Paul Mackerras wrote:
>> 
>>> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>>>> 
>>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>>>> server. Now the array we get would look like below
>>>>> 
>>>>> slbe0 slbv0
>>>>> slbe1 slbv1
>>>>> 0000  00000
>>>>> 0000  00000
>>>> 
>>>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly, or the kernel simply returns invalid SLB entry values for invalid entries.
>>>> 
>>>> Are you seeing this with PR KVM or HV KVM?
>>> 
>>> I suspect this is to do with the fact that PR and HV KVM use the
>>> vcpu->arch.slb[] array differently.  PR stores SLB entry n in
>>> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
>>> low-numbered entries and puts the index in the bottom bits of the esid
>>> field (this is so they can be loaded efficiently with the slbmte
>>> instruction on guest entry).
>>> 
>>> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
>>> (valid or not) and puts an index value in the bottom bits of the esid,
>>> whereas on HV it just copies out the valid entries (which already have
>>> the index in the esid field).
>>> 
>>> So, the question is, what is the ABI here?  It sounds a bit like qemu
>>> is ignoring the index value in the esid field.  Either qemu needs to
>>> take notice of the index in the esid field or we need to change the HV
>>> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
>>> sregs->u.s.ppc64.slb[n] like PR does.
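
For anyone following along, the two copy-out styles Paul describes look roughly like this (a paraphrase of the idea, not the literal kernel code; field names like orige/origv and slb_max are from memory, not verified):

    /* Rough sketch of kvm_arch_vcpu_ioctl_get_sregs(), paraphrased. */

    /* PR: entry n lands in slot n, valid or not, with an index value
     * stuffed into the low bits of the esid. */
    for (i = 0; i < 64; i++) {
        sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
        sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
    }

    /* HV: only valid entries are copied out, packed at the front; the
     * index is already in the low bits of the esid for slbmte. */
    for (i = 0; i < vcpu->arch.slb_max; i++) {
        sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige;
        sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
    }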
>> 
>> It's the opposite today: QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore the index instead. For sregs set, we copy our internal copy of the SLB linearly into the array, so we don't pack it there.
>> 
>> Can we safely assume on HV KVM that esid == 0 and vsid == 0 marks the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
> 
> On HV KVM yes, that would be the end of the list, but PR KVM could
> give you entry 0 containing esid==0 and vsid==0 followed by valid
> entries.  Perhaps the best approach is to ignore any entries with
> SLB_ESID_V clear.

That means we wouldn't clear entries that the kernel doesn't send us because they're V=0 now, but which were V=1 before. With the current code that case is probably already broken.

So yes: clearing all cached entries first (to make sure we have no stale ones), then looping through all returned entries and only adding those with V=1, should fix everything for PR as well as HV.
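
In QEMU's kvm_arch_get_registers() that would be something like this (sketch only, untested; assumes ppc_store_slb() and SLB_ESID_V as they exist in the current tree):

    /* Sketch, untested: drop all cached SLB entries first, then only
     * restore the ones the kernel marked valid. This handles both the
     * packed HV layout and the indexed PR layout. */
    memset(env->slb, 0, sizeof(env->slb));
    for (i = 0; i < ARRAY_SIZE(env->slb); i++) {
        target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
        target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;

        /* Skip invalid entries; stale V=0 slots stay cleared. */
        if (rb & SLB_ESID_V) {
            ppc_store_slb(env, rb, rs);
        }
    }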


Alex

> 
> Paul.