* Peter Maydell (peter.maydell@xxxxxxxxxx) wrote:
> On Mon, 11 Jul 2022 at 14:24, Dr. David Alan Gilbert
> <dgilbert@xxxxxxxxxx> wrote:
> > But, ignoring postcopy for a minute, with KVM how do different types of
> > backing memory work - e.g. if I back a region of guest memory with
> > /dev/shm/something or a hugepage equivalent, where does the MTE memory
> > come from, and how do you set it?
>
> Generally in an MTE system anything that's "plain old RAM" is expected
> to support tags. (The architecture manual calls this "conventional
> memory". This isn't quite the same as "anything that looks RAM-like",
> e.g. the graphics card framebuffer doesn't have to support tags!)

I guess things like non-volatile disks mapped as DAX are fun edge cases.

> One plausible implementation is that the firmware and memory controller
> are in cahoots and arrange that the appropriate fraction of the DRAM is
> reserved for holding tags (and inaccessible as normal RAM even by the OS);
> but where the tags are stored is entirely impdef and an implementation
> could choose to put the tags in their own entirely separate storage if
> it liked. The only way to access the tag storage is via the instructions
> for getting and setting tags.

Hmm OK; in postcopy, at the moment, the call QEMU uses is one that
atomically places a page of data in memory and then tells the vCPUs to
continue.  I guess a variant that took an extra blob of MTE data would do.

Note that other VMMs built on KVM work in different ways; the other
common way is to write into the backing file (i.e. the /dev/shm file,
atomically somehow) and then make the userfault call to tell the vCPUs
to continue.  It looks like this is the way things will work in the
split hugepage mechanism Google are currently adding.

Dave

> -- PMM

-- 
Dr. David Alan Gilbert / dgilbert@xxxxxxxxxx / Manchester, UK