Re: [PATCH v2 3/3] KVM: PPC: Book3S: MMIO emulation support for little endian guests

On 11/04/2013 12:44 PM, Alexander Graf wrote:
> 
> On 10.10.2013, at 12:16, Paul Mackerras <paulus@xxxxxxxxx> wrote:
> 
>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>>>
>>>
>>> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <paulus@xxxxxxxxx>:
>>>
>>>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>>>
>>>>>
>>>>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <paulus@xxxxxxxxx>:
>>>>>
>>>>>> True, until we get to POWER8 with its split little-endian support,
>>>>>> where instructions and data can have different endianness...
>>>>>
>>>>> How exactly does that work?
>>>>
>>>> They added an extra MSR bit called SLE which enables the split-endian
>>>> mode.  It's bit 5 (IBM numbering).  For backwards compatibility, the
>>>> LE bit controls instruction endianness, and data endianness depends on
>>>> LE ^ SLE, that is, with SLE = 0 things work as before.  With SLE=1 and
>>>> LE=0 you get little-endian data and big-endian instructions, and vice
>>>> versa with SLE=1 and LE=1.
>>>
>>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps even the vcpu cached version if it's set, no?
>>
>> Not exactly; instruction endianness depends only on MSR[LE], so
>> get_last_inst should not look at MSR[SLE].  I would think the vcpu
>> cached version should be host endian always.
> 
> I agree. It makes the code flow easier.
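
To summarize the behaviour described above, as I understand it (to be
double-checked against the ISA):

    MSR[LE]   MSR[SLE]   instruction fetch   data access
       0         0         big endian         big endian
       0         1         big endian         little endian
       1         0         little endian      little endian
       1         1         little endian      big endian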


To take the host endian order into account when deciding whether we
should byteswap, we could modify kvmppc_need_byteswap() as follows:


+/*
+ * Compare endian order of host and guest to determine whether we need
+ * to byteswap or not
+ */
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
-       return vcpu->arch.shared->msr & MSR_LE;
+       return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
+               ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
 }
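
FWIW, if I read reg.h correctly MSR_LE_LG is 0, so the shifts above are
no-ops and the same test could also be written as a plain one-bit
comparison (equivalent, untested):

 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 {
        /* byteswap whenever host and guest disagree on MSR[LE] */
        return !!(mfmsr() & MSR_LE) != !!(vcpu->arch.shared->msr & MSR_LE);
 }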



I think MSR[SLE] could then be handled this way:


 static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
@@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
                              u32 *ptr, bool data)
 {
        int ret;
+       bool byteswap;
 
        ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
 
-       if (kvmppc_need_byteswap(vcpu))
+       byteswap = kvmppc_need_byteswap(vcpu);
+
+       /* If we are loading data from a guest which is in Split
+        * Little Endian mode, the byte order is reversed.
+        */
+       if (data && (vcpu->arch.shared->msr & MSR_SLE))
+               byteswap = !byteswap;
+
+       if (byteswap)
                *ptr = swab32(*ptr);
 
        return ret;
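

For symmetry, the store side would need the same flip. A possible
counterpart (completely untested; kvmppc_st32() is just a name I made
up, and I am assuming the existing kvmppc_st() signature):

 static inline int kvmppc_st32(struct kvm_vcpu *vcpu, ulong *eaddr,
                               u32 *ptr, bool data)
 {
        u32 val = *ptr;
        bool byteswap = kvmppc_need_byteswap(vcpu);

        /* MSR[SLE] reverses the byte order of data accesses only */
        if (data && (vcpu->arch.shared->msr & MSR_SLE))
                byteswap = !byteswap;

        if (byteswap)
                val = swab32(val);

        return kvmppc_st(vcpu, eaddr, sizeof(u32), &val, data);
 }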


How does that look?

This is not tested and the MSR_SLE definition is missing. I will fix that in v5.

Thanks,

C.
