Re: [PATCH] target-ppc: Update slb array with correct index values.

Alexander Graf <agraf@xxxxxxx> writes:

> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>
>> Alexander Graf <agraf@xxxxxxx> writes:
>> 
>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>> 
>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
>>>> 
>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>>> 
>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
>>> 
>>> Wrong mailing list again ;).
>> 
>> Will post the series again with updated commit message to the qemu list.
>> 
>>> 
>>>> ---
>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>> 
>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>> index 30a870e..5d4e613 100644
>>>> --- a/target-ppc/kvm.c
>>>> +++ b/target-ppc/kvm.c
>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>        /* Sync SLB */
>>>> #ifdef TARGET_PPC64
>>>>        for (i = 0; i < 64; i++) {
>>>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>> -                               sregs.u.s.ppc64.slb[i].slbv);
>>>> +            target_ulong rb  = sregs.u.s.ppc64.slb[i].slbe;
>>>> +            /*
>>>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>>>> +             * information set to the array index, so don't depend on
>>>> +             * the slot information in the returned value.
>>> 
>>> This is the generating code in book3s_pr.c:
>>> 
>>>        if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>                for (i = 0; i < 64; i++) {
>>>                        sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>                        sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>                }
>>> 
>>> Where exactly did you see broken slbe entries?
>>> 
>> 
>> I noticed this when adding support for guest memory dumping via the qemu
>> gdb server. The array we get back looks like this:
>> 
>> slbe0 slbv0
>> slbe1 slbv1
>> 0000  00000
>> 0000  00000
>
> Ok, so that's where the problem lies. Why are the entries 0 here?
> Either we try to fetch more entries than we should, we populate
> entries incorrectly or the kernel simply returns invalid SLB entry
> values for invalid entries.


The ioctl zeroes out sregs and fills in only slb_max entries, so we find
zero-filled entries above slb_max. Also, slb_max is not passed to user
space, so userspace has to look at all 64 entries.


>
> Are you seeing this with PR KVM or HV KVM?
>

HV KVM

-aneesh

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



