On 11/18/2010 01:42 AM, Anthony Liguori wrote:
Gack. For the benefit of those that want to join the fun without
digging up the spec: these magic flippable segments the i440fx can
toggle are 12 fixed 16k segments from 0xc0000 to 0xeffff and a single
64k segment from 0xf0000 to 0xfffff. There are read-enable and
write-enable bits for each, so the chipset can be configured to read
from the BIOS and write to memory (to set up BIOS-RAM caching), and to
read from memory and write to the BIOS (to enable BIOS-RAM caching).
The other bit combinations are also available.
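In code terms, each PAM nibble routes reads and writes independently.
A minimal sketch (not qemu code; bit layout per my reading of the
440FX spec):

/* Each PAM nibble: bit 0 = read enable, bit 1 = write enable.
 * A set bit sends the access to RAM; a clear bit forwards it to
 * PCI, i.e. the BIOS ROM. */
#include <stdio.h>

enum target { TO_PCI_BIOS, TO_RAM };

static enum target pam_read_target(unsigned nibble)
{
    return (nibble & 1) ? TO_RAM : TO_PCI_BIOS;
}

static enum target pam_write_target(unsigned nibble)
{
    return (nibble & 2) ? TO_RAM : TO_PCI_BIOS;
}

int main(void)
{
    /* RE=0/WE=1: read the ROM, write to RAM -- the combination
     * used while copying the BIOS into its shadow. */
    unsigned nibble = 0x2;
    printf("reads -> %s, writes -> %s\n",
           pam_read_target(nibble) == TO_RAM ? "RAM" : "PCI/BIOS",
           pam_write_target(nibble) == TO_RAM ? "RAM" : "PCI/BIOS");
    return 0;
}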
Yup. As Gleb mentions, there's the SMRAM register, which controls
whether 0xa0000 is mapped to PCI or whether it's mapped to RAM (but
KVM explicitly disables SMM support).
KVM not supporting SMM is a bug (albeit one that is likely to remain
unresolved for a while). Let's pretend that KVM SMM support is not an
issue.
IIUC, SMM means there are two memory maps when the CPU accesses
memory: one for SMM, one for non-SMM.
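Something like this, conceptually (illustrative only -- real SMRAM
policy bits like D_OPEN are ignored):

/* Two overlapping views of 0xa0000-0xbffff, selected by whether
 * the access comes from a CPU in SMM. */
#include <stdio.h>

enum backing { PCI_VGA, SMRAM };

static enum backing resolve_a0000(int cpu_in_smm)
{
    return cpu_in_smm ? SMRAM : PCI_VGA;
}

int main(void)
{
    printf("non-SMM access: %s\n",
           resolve_a0000(0) == SMRAM ? "SMRAM" : "PCI/VGA");
    printf("SMM access:     %s\n",
           resolve_a0000(1) == SMRAM ? "SMRAM" : "PCI/VGA");
    return 0;
}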
For my purposes (programming the IOMMU with guest-physical-to-host-virtual
translations for device assignment), it doesn't really matter, since
there should never be a DMA to this range of memory. But for a
general RAM API, I'm not sure either. I'm tempted to say that while
this is in fact a use of RAM, the RAM is never presented to the guest
as usable system memory (E820_RAM for x86), and should therefore be
excluded from the RAM API if we're using it only to track regions that
are actual guest-usable physical memory.
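For concreteness, a typical PC e820 map looks something like the
following (values are illustrative, not from any particular guest).
Note that nothing between 0xa0000 and 0xfffff is ever typed E820_RAM,
even though chipset-controlled RAM sits behind parts of it:

#include <stdint.h>

#define E820_RAM      1
#define E820_RESERVED 2

struct e820_entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

static const struct e820_entry e820_map[] = {
    { 0x00000000, 0x0009fc00, E820_RAM      },  /* base memory */
    { 0x0009fc00, 0x00000400, E820_RESERVED },  /* EBDA        */
    { 0x000f0000, 0x00010000, E820_RESERVED },  /* BIOS        */
    { 0x00100000, 0x3ff00000, E820_RAM      },  /* extended    */
};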
We had talked on IRC about pc.c registering 0x0 to below_4g_mem_size
as RAM, but now I tend to disagree with that. The memory backing
0xa0000-0x100000 is present, but it's not presented to the guest as
usable RAM. What's your strict definition of what the RAM API
includes? Is it only what the guest could consider usable RAM, or
does it also include quirky chipset accelerator features like this
(everything with a guest physical address)? Thanks,
Today we model a flat address space that's a mix of device memory,
RAM, and ROM. This is not how machines work, and the limitations of
this model are holding us back.
IRL, there's a block of RAM that's connected to a memory controller.
The CPU is also connected to the memory controller. Devices are
connected to another controller which is in turn connected to the
memory controller. There may, in fact, be more than one controller
between a device and the memory controller.
A controller may change the way a device sees memory in arbitrary
ways. In fact, two devices accessing the same page might see
something totally different.
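To make that concrete, here's an invented sketch (none of these names
exist in qemu) of translations composing along a controller chain:

#include <stdint.h>
#include <stdio.h>

struct controller {
    struct controller *parent;            /* next hop toward the DIMMs */
    uint64_t (*translate)(uint64_t addr);
};

static uint64_t identity(uint64_t a) { return a; }
static uint64_t add_1m(uint64_t a)   { return a + 0x100000; }

/* Walk toward the memory controller, translating at each hop. */
static uint64_t resolve(struct controller *c, uint64_t addr)
{
    for (; c; c = c->parent)
        addr = c->translate(addr);
    return addr;                          /* an actual DIMM address */
}

int main(void)
{
    struct controller mc  = { NULL, identity };  /* memory controller */
    struct controller pci = { &mc,  add_1m };    /* e.g. a PCI host   */

    /* The same "address 0" reaches different DIMM locations
     * depending on which path the access takes. */
    printf("via mc:  0x%llx\n", (unsigned long long)resolve(&mc, 0));
    printf("via pci: 0x%llx\n", (unsigned long long)resolve(&pci, 0));
    return 0;
}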
The idea behind the RAM API is to begin to establish this hierarchy.
RAM is not what any particular device sees--it's actual RAM. IOW, the
RAM API should represent what address mapping I would get if I talked
directly to DIMMs.
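In rough code terms, I'd expect a shape like the following
(hypothetical signatures, purely to illustrate the layering):

#include <stdint.h>

typedef uint64_t ram_addr_t;

/* Hypothetical: allocate a chunk of actual RAM; the returned
 * offset names it in RAM space, independent of any bus. */
ram_addr_t ram_register(const char *name, ram_addr_t size);

/* Hypothetical: a bus-level API layered on top decides where
 * (and whether) that RAM appears in a given address space. */
void bus_map_ram(const char *bus, uint64_t bus_addr,
                 ram_addr_t ram_offset, ram_addr_t size);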
This is not what RamBlock is, even though the name would suggest
otherwise. RamBlocks are anything that qemu represents as
cache-consistent, directly accessible memory. Device ROMs and areas
of device RAM are all allocated from the RamBlock space.
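For example, the BIOS ROM is carved out of RamBlock space just like
RAM today -- roughly what pc.c does (from memory; exact arguments
vary by tree):

/* Allocate the ROM's backing from RamBlock space... */
bios_offset = qemu_ram_alloc(NULL, "pc.bios", bios_size);
/* ...then map it read-only at the top of the 4G space. */
cpu_register_physical_memory((uint32_t)(-bios_size), bios_size,
                             bios_offset | IO_MEM_ROM);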
So the very first task of a RAM API is to simply differentiate these
two things. Once we have the base RAM API, we can start adding the
proper APIs that sit on top of it (like a PCI memory API).
Things aren't that bad - a ram_addr_t and a physical address are already
different things, so we already have one level of translation.
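E.g. the existing registration call is exactly where the two meet
(signature roughly as in cpu-common.h):

/* start_addr is a guest physical address; phys_offset names the
 * backing in ram_addr_t space (plus IO_MEM_* flags in the low
 * bits).  The two are distinct namespaces, joined only here. */
void cpu_register_physical_memory(target_phys_addr_t start_addr,
                                  ram_addr_t size,
                                  ram_addr_t phys_offset);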
--
error compiling committee.c: too many arguments to function