Re: [PATCH V3 2/2] kvm tools: Create arch-specific kvm_cpu__emulate_{mm}io()

Hey Alex,

On 24/12/11 00:39, Alexander Graf wrote:
> 
> On 23.12.2011, at 14:26, Matt Evans wrote:
> 
>>
>> On 23/12/2011, at 11:58 PM, Alexander Graf wrote:
>>
>>>
>>> On 13.12.2011, at 07:21, Matt Evans wrote:
>>>
>>>> Different architectures will deal with MMIO exits differently.  For example,
>>>> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
>>>> into windows in PCI bridges on other architectures.
>>>>
>>>> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
>>>> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
>>>> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
>>>> perform some address munging before passing on the call.
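(For reference, the x86 hooks are trivial pass-throughs, roughly:

/* On x86 the arch hooks just forward to the generic emulation,
 * since there PCI bus addresses == CPU physical addresses. */
static inline bool kvm_cpu__emulate_io(struct kvm *kvm, u16 port, void *data,
				       int direction, int size, u32 count)
{
	return kvm__emulate_io(kvm, port, data, direction, size, count);
}

static inline bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr,
					 u8 *data, u32 len, u8 is_write)
{
	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
}

The ppc versions do the munging described below.)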
>>>
>>> Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.
>>
>> PIO not there?  PIO is used heavily in kvmtool.  So, I made a window in a similar way to how a real PHB has PIO-window-in-MMIO.
>>
>> PCI BARs are currently 32-bit.  I don't want to limit the guest RAM to <4G
>> nor puncture holes in it just to make it look like x86... PCI bus addresses
>> == CPU addresses is a bit of an x86ism.  So, I just used another PHB window
>> to offset 32-bit PCI MMIO up somewhere else.  We can then use all 4G of PCI
>> MMIO space without putting that at addr 0 and RAM starting >4G.  (And then,
>> exception vectors where?)
>>
>> The PCI/BARs/MMIO code could really support 64bit addresses though that's a
>> bit of an orthogonal bit of work.  Why should PPC have an MMIO hole in the
>> middle of RAM?
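To make the window idea concrete, the ppc-side steering ends up looking
something like this -- a sketch only, the bases/sizes here are illustrative,
not the real numbers:

#define PHB_PIO_BASE	0x1000000000ULL		/* illustrative          */
#define PHB_PIO_SIZE	0x10000ULL		/* 64K of ports          */
#define PHB_MMIO_BASE	0xf00000000ULL		/* illustrative          */
#define PHB_MMIO_SIZE	0x100000000ULL		/* all 4G of 32-bit MMIO */

bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data,
			   u32 len, u8 is_write)
{
	if (phys_addr >= PHB_PIO_BASE &&
	    phys_addr < PHB_PIO_BASE + PHB_PIO_SIZE)
		/* PIO window: steer into the existing ioport emulation,
		 * port number = offset into the window. */
		return kvm__emulate_io(kvm, phys_addr - PHB_PIO_BASE, data,
				       is_write ? KVM_EXIT_IO_OUT
						: KVM_EXIT_IO_IN, len, 1);

	if (phys_addr >= PHB_MMIO_BASE &&
	    phys_addr < PHB_MMIO_BASE + PHB_MMIO_SIZE)
		/* 32-bit PCI MMIO window: CPU address -> PCI bus address. */
		phys_addr -= PHB_MMIO_BASE;

	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
}

The device layer only ever sees 32-bit PCI bus addresses; the CPU-side
layout is the PHB's business.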
> 

Sooo.. call it post-holiday bliss, but I don't understand what you're saying
here. :)

> I fully agree with what you're saying, but the layering seems off. If the CPU
> gets an MMIO request, it gets that on a physical address from the view of the
  ^^^^ produces?

> CPU. Why would you want to have manual munging there to get to whatever window
> you have? Just map the MMIO regions to the higher addresses and expose
> whatever different representation you have to the device, not to the CPU
> layer.

What do you mean here by "map" and "representation"?  The only way I can parse
this is as though you're describing PCI devices seeing PCI bus addresses which
CPU MMIOs are converted to by the window offset, i.e. what already exists,
i.e. what you're disagreeing with :-)

Sorry.. please explain some more.  Is your suggestion to make CPU phys addresses
and PCI bus addresses 1:1?  (Hole in RAM..)
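To be explicit about the two layouts as I see them (numbers purely
illustrative, u64 as per kvmtool's <linux/types.h>):

struct gpa_region {
	u64 start, size;
	const char *use;
};

/* What I did: offset windows, guest RAM contiguous from 0. */
static const struct gpa_region layout_windows[] = {
	{ 0x000000000ULL,  0x200000000ULL, "guest RAM (8G, no holes)"   },
	{ 0xf00000000ULL,  0x100000000ULL, "32-bit PCI MMIO window"     },
	{ 0x1000000000ULL, 0x10000ULL,     "PIO window"                 },
};

/* 1:1 CPU == PCI bus addresses, x86-style. */
static const struct gpa_region layout_1to1[] = {
	{ 0x0ULL,         0xc0000000ULL,  "guest RAM below the hole"    },
	{ 0xc0000000ULL,  0x40000000ULL,  "32-bit PCI MMIO (the hole)"  },
	{ 0x100000000ULL, 0x140000000ULL, "rest of guest RAM, above 4G" },
};

The first keeps RAM contiguous; the second is simpler but costs us the
hole, which is exactly what I was trying to avoid.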


Thanks!


Matt

