On 02.05.21 08:33, Mike Rapoport wrote:
> On Thu, Apr 29, 2021 at 02:25:18PM +0200, David Hildenbrand wrote:
>> Let's properly use page_offline_(begin|end) to synchronize setting
>> PageOffline(), so we won't get valid page access to unplugged memory
>> regions via /proc/kcore.
>>
>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>> ---
>>  drivers/virtio/virtio_mem.c | 2 ++
>>  mm/util.c                   | 2 ++
>>  2 files changed, 4 insertions(+)
>>
>> diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
>> index 10ec60d81e84..dc2a2e2b2ff8 100644
>> --- a/drivers/virtio/virtio_mem.c
>> +++ b/drivers/virtio/virtio_mem.c
>> @@ -1065,6 +1065,7 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
>>  static void virtio_mem_set_fake_offline(unsigned long pfn,
>>  					unsigned long nr_pages, bool onlined)
>>  {
>> +	page_offline_begin();
>>  	for (; nr_pages--; pfn++) {
>>  		struct page *page = pfn_to_page(pfn);
>>
>> @@ -1075,6 +1076,7 @@ static void virtio_mem_set_fake_offline(unsigned long pfn,
>>  			ClearPageReserved(page);
>>  		}
>>  	}
>> +	page_offline_end();
> I'm not really familiar with ballooning and memory hotplug, but is it the
> only place that needs page_offline_{begin,end}?
The existing balloon implementations that I am aware of (Hyper-V, Xen,
virtio-balloon, VMware balloon) usually allow reading inflated memory;
doing so might result in unnecessary overhead in the hypervisor, so we
really want to avoid it -- but it's not strictly forbidden and has been
working forever. So we barely care about races: if there were a rare
race, we'd still be able to read that memory.
For virtio-mem, it'll be different in the future when using shmem, huge
pages, !anonymous private mappings, ... as backing storage for a VM.
There will be a virtio spec extension documenting the changed behavior:
reading unplugged memory won't be allowed, and doing so will result in
undefined behavior.
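
To make the pairing concrete, here is a minimal reader-side sketch. It
assumes the page_offline_freeze()/page_offline_thaw() helpers that take
the shared side of the same synchronization that page_offline_begin()/
page_offline_end() take exclusively in the hunk above;
read_page_if_online() is a made-up name for illustration, not the
actual /proc/kcore code:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/page-flags.h>
#include <linux/string.h>

/* Hypothetical helper: copy a page's contents only if it is not offline. */
static size_t read_page_if_online(unsigned long pfn, void *buf)
{
	struct page *page;
	size_t copied = 0;
	void *kaddr;

	if (!pfn_valid(pfn))
		return 0;
	page = pfn_to_page(pfn);

	/* Shared side: excludes page_offline_begin()/page_offline_end(). */
	page_offline_freeze();
	if (!PageOffline(page)) {
		/*
		 * The page cannot transition to (fake) offline until we
		 * thaw, so reading its contents is safe here.
		 */
		kaddr = kmap_local_page(page);
		memcpy(buf, kaddr, PAGE_SIZE);
		kunmap_local(kaddr);
		copied = PAGE_SIZE;
	}
	page_offline_thaw();

	return copied;
}

The important detail is that the PageOffline() test and the actual read
happen under the same freeze/thaw section; checking first and copying
after thawing would reintroduce the race this patch closes.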
--
Thanks,
David / dhildenb