Re: [PATCH 00/18] KVM: PPC: Virtualize Gekko guests

Avi Kivity wrote:
> On 02/07/2010 05:49 PM, Alexander Graf wrote:
>> Am 07.02.2010 um 13:54 schrieb Avi Kivity <avi@xxxxxxxxxx>:
>>
>>> On 02/04/2010 05:55 PM, Alexander Graf wrote:
>>>> In an effort to get KVM on PPC more useful for other userspace
>>>> users than
>>>> Qemu, I figured it'd be a nice idea to implement virtualization of the
>>>> Gekko CPU.
>>>>
>>>> The Gekko is the CPU used in the GameCube. In a slightly more modern
>>>> fashion it lives on in the Wii today.
>>>>
>>>> Using this patch set and a modified version of Dolphin, I was able to
>>>> virtualize simple GameCube demos on a 970MP system.
>>>>
>>>> As always, while getting this to run I stumbled across several broken
>>>> parts and fixed them as they came up. So expect some bug fixes in this
>>>> patch set too.
>>>>
>>>
>>> This is halfway into emulation rather than virtualization.  What
>>> does performance look like when running fpu intensive applications?
>>
>> The emulation is for the FPU. It is not for whatever else runs on
>> the CPU.
>>
>> I haven't benchmarked things so far.
>>
>> The only two choices I have to get this running are in-kernel
>> emulation or userspace emulation. Judging by how x86 deals with
>> things, I suppose a full state transition to userspace and continuing
>> emulation there isn't considered a good idea. So I went with in-kernel.
>
> It's not a good idea for the kernel either, if it happens all the
> time.  If a typical Gekko application uses the fpu and the emulated
> instructions intensively, performance will suck badly (as in: qemu/tcg
> will be faster).
>

Yeah, I haven't really gotten far enough to run full-blown guests yet.
So far I'm on demos and they look pretty good.

But as far as intercept speed goes, I just tried running this little
piece of code in kvmctl:

.global _start
_start:
    li      r3, 42      # load the immediate 42 into r3
    mtsprg  0, r3       # write r3 to SPRG0 - privileged, traps for emulation
    mfsprg  r4, 0       # read SPRG0 back into r4 - also traps
    b       _start      # loop forever

and measured the number of exits I got on my test machine:

processor    : 0
cpu        : PPC970MP, altivec supported
clock        : 2500.000000MHz
revision    : 1.1 (pvr 0044 0101)

--->

exits      1811108

I have no idea how we manage to get that many exits, but apparently we
do. So I'm less concerned about the speed of the FPU rerouting at the
moment.
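For scale, the numbers above can be turned into a rough cycles-per-exit
estimate. This is a sketch, not something stated in the mail: the
one-second sampling window is purely an assumption (the mail gives the
exit count but not the interval it was measured over), so take the
result only as an order-of-magnitude illustration.

```python
# Back-of-envelope cost per exit, from the figures quoted above.
# ASSUMPTION (not stated in the mail): the exit counter was sampled
# over roughly a one-second window; the window length is hypothetical.
CLOCK_HZ = 2_500_000_000   # 2500.000000MHz, from the /proc/cpuinfo dump
EXITS = 1_811_108          # exit count reported by kvmctl
WINDOW_S = 1.0             # hypothetical sampling window

cycles_per_exit = CLOCK_HZ * WINDOW_S / EXITS
print(f"~{cycles_per_exit:.0f} cycles per exit")
```

Under that assumption each exit costs on the order of a thousand-odd
cycles, which is consistent with being unconcerned about per-exit
overhead for now.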

If it really gets unusably slow, I'd rather binary-patch the guest on
the fly in KVM, according to rules set by the userspace client. But
we'll get there when it turns out to be too slow. For now I'd rather
have something working at all, and then improve speed :-).

Alex
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
