Re: [LSF/MM/BPF TOPIC] BoF VM live migration over CXL memory


 



Hi James, sorry, looks like I missed your email...


On 4/12/23 10:15, James Bottomley wrote:
On Wed, 2023-04-12 at 10:38 +0200, David Hildenbrand wrote:
On 12.04.23 04:54, Huang, Ying wrote:
Gregory Price <gregory.price@xxxxxxxxxxxx> writes:
[...]
That feels like a hack/bodge rather than a proper solution to me.

Maybe this is an affirmative argument for the creation of an
EXMEM zone.

Let's start with requirements.  What is the requirements for a new
zone type?

I'm still scratching my head regarding this. I keep hearing all
different kinds of statements that just add more confusion: "we want
it to be hotunpluggable", "we want to allow for long-term pinning
memory", "but we still want it to be movable", "we want to place some
unmovable allocations on it". Huh?

This is the essential question about CXL memory itself: what would its
killer app be?  The CXL people (or at least the ones I've talked to)
don't exactly know.


I hope it's not something I've said; I'm not claiming VM migration or hypervisor clustering is the killer app for CXL. I would never claim that. And I'm not one of the CXL folks; you can chuck me into the "CXL enthusiasts" bucket.

For a bit of context, I'm one of the co-authors/architects of VMware's clustered filesystem[1], and I've worked on live VM migration as far back as 2003 on the original ESX server. Back in the day, we introduced the concept of VM live migration into the x86 data-center parlance with a combination of a process monitor and a clustered filesystem. The basic mechanism we put forward at the time was: pre-copy, quiesce, post-copy, un-quiesce. I think most hypervisors that added live migration afterwards use loosely the same basic principles; IIRC Xen introduced live migration four years later, in 2007, and KVM around the same time or perhaps a year later.

Anyway, the point I'm trying to get to is: it bugged me 20 years ago that we quiesced, and it bugs me today :) Twenty years ago, quiescing was an acceptable compromise because we couldn't solve it technologically. Maybe 20-25 years later, we've reached a point where we can solve it. I don't know, but the problem interests me enough to try.
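To make the mechanism above concrete, here is a toy simulation of the classic iterative pre-copy loop (pre-copy, quiesce, post-copy, un-quiesce). All names, thresholds, and numbers are illustrative, not from any real hypervisor:

```python
# Toy sketch of iterative pre-copy live migration. Pages dirtied while a
# round is in flight are re-copied in the next round; when the dirty set is
# small enough (or rounds run out), the guest is quiesced and the remainder
# is copied during downtime. Hypothetical parameters throughout.

def live_migrate(pages, dirtied_per_round, max_rounds=5, stop_threshold=2):
    """Return (total pages transferred, pages copied while quiesced)."""
    transferred = 0
    dirty = set(pages)          # round 1: everything counts as dirty
    rounds = 0
    while len(dirty) > stop_threshold and rounds < max_rounds:
        transferred += len(dirty)       # pre-copy this round's dirty set
        rounds += 1
        # pages the still-running guest dirtied during that copy
        if rounds - 1 < len(dirtied_per_round):
            dirty = set(dirtied_per_round[rounds - 1])
        else:
            dirty = set()
    # quiesce: guest paused, remaining dirty pages copied during downtime
    downtime_pages = len(dirty)
    transferred += downtime_pages
    # un-quiesce on the destination host
    return transferred, downtime_pages

total, downtime = live_migrate(
    pages=range(100),
    dirtied_per_round=[list(range(20)), list(range(5)), [0]],
)
print(total, downtime)  # prints "126 1"
```

The point of the exercise: the guest keeps running through every pre-copy round, and only the final (hopefully tiny) dirty set is paid for as downtime.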


 Within IBM I've seen lots of ideas but no actual
concrete applications.  Given the rates at which memory density in
systems is increasing, I'm a bit dubious of the extensible system pool
argument.   Providing extensible memory to VMs sounds a bit more
plausible, particularly as it solves a big part of the local overcommit
problem (although you still have a global one).  I'm not really sure I
buy the VM migration use case: iterative transfer works fine with small
down times so transferring memory seems to be the least of problems
with the VM migration use case

We do approximately 2.5 million live migrations per year. Some migrations take less than a second, some take roughly a second, and others, on very noisy VMs, can take several seconds. Whatever the average is, call it 1 second per live migration; that's cumulatively roughly 28 days of steal lost to migration per year. As you probably know, live migrations are essential for de-fragmenting hypervisors/de-stranding resources, and from my perspective I'd like to see them happen more often, with a smaller customer impact.
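The ~28-days figure is just arithmetic on the stated assumptions (~2.5 million migrations/year, ~1 second of downtime each):

```python
# Back-of-the-envelope check of the cumulative steal figure above.
# Both inputs are the rough assumptions stated in the text.
migrations_per_year = 2_500_000
downtime_seconds = 1                     # assumed average per migration
total_seconds = migrations_per_year * downtime_seconds
days = total_seconds / 86_400            # seconds in a day
print(round(days, 1))  # prints "28.9"
```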


(it's mostly about problems with attached devices).

That is purely dependent on the virtualization load type. Maybe for the cloud you're running, devices are a problem (I'm guessing here). For us this is a non-existent problem: we serve approximately 600,000 customers and don't do any form of pass-through, so it's literally a non-issue. What I am starting to tackle with nil-migration is the ability to migrate live, executing memory instead of frozen memory, which should especially help with noisy VMs; in my experience, customers of noisy VMs are the most likely to notice steal and complain about it. I understand everyone has their own workloads, and the devices problem will be solved in its own right, but it's out of scope for what I am tackling with nil-migration. My main focus at this time is memory and context migration.


CXL 3.0 is adding sharing primitives for memory so
now we have to ask if there are any multi-node shared memory use cases
for this, but most of us have already been burned by multi-node shared
clusters once in our career and are a bit leery of a second go around.

Chatting with you at the last LPC, and judging by the combined gray hair between us, I'll venture to guess we've both fallen off the proverbial bike many times. It's never stopped me from getting back on. The issue interests me enough to try.

If you don't mind me asking, what clustering did you work on? Maybe I'm familiar with it.



Is there a use case I left out (or needs expanding)?

James




[1]. https://en.wikipedia.org/wiki/VMware_VMFS

--
Peace can only come as a natural consequence
of universal enlightenment -Dr. Nikola Tesla



