On Fri, Nov 05, 2021 at 06:17:11PM +0100, Janis Schoetterl-Glausch wrote:
> On 11/4/21 23:45, David Matlack wrote:
> > [...]
> >
> > The last alternative is to perform dirty tracking at a 2M granularity.
> > This would reduce the amount of splitting work required by 512x,
> > making the current approach of splitting on fault less impactful to
> > customer performance. We are in the early stages of investigating 2M
> > dirty tracking internally but it will be a while before it is proven
> > and ready for production. Furthermore there may be scenarios where
> > dirty tracking at 4K would be preferable to reduce the amount of
> > memory that needs to be demand-faulted during precopy.

Oops, I meant to say "demand-faulted during post-copy" here.

> I'm curious how you're going about evaluating this, as I've experimented with
> 2M dirty tracking in the past, in a continuous checkpointing context however.
> I suspect it's very sensitive to the workload. If the coarser granularity
> leads to more memory being considered dirty, the length of pre-copy rounds
> increases, giving the workload more time to dirty even more memory.
> Ideally large pages would be used only for regions that won't be dirty or
> regions that would also be pretty much completely dirty when tracking at 4K.
> But deciding the granularity adaptively is hard, doing 2M tracking instead
> of 4K robs you of the very information you'd need to judge that.

We're planning to look at how 2M tracking affects the amount of memory
that needs to be demand-faulted during the post-copy phase for different
workloads.
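
To make the amplification effect you describe concrete, here is a quick
userspace sketch (purely illustrative, not KVM code; the 4G guest size and
the 1% uniformly-random dirty rate are made-up parameters). It takes a
4K-granularity dirty set and counts how many 4K pages would have to be
sent if the same writes were instead tracked per 2M region, i.e. a region
is considered dirty as soon as any of its 512 constituent 4K pages is:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGES_PER_2M 512UL /* 2M / 4K */

/*
 * Count the 4K pages that would be sent if dirty state were tracked
 * per 2M region: a region is dirty if any of its 4K pages is dirty.
 */
static unsigned long pages_sent_at_2m(const bool *dirty_4k,
                                      unsigned long nr_pages)
{
        unsigned long sent = 0, i, j;

        for (i = 0; i < nr_pages; i += PAGES_PER_2M) {
                unsigned long end = i + PAGES_PER_2M;

                if (end > nr_pages)
                        end = nr_pages;
                for (j = i; j < end; j++) {
                        if (dirty_4k[j]) {
                                sent += end - i;
                                break;
                        }
                }
        }
        return sent;
}

int main(void)
{
        /*
         * Made-up workload: 4G of guest memory, 1% of 4K pages dirtied
         * uniformly at random during one pre-copy round.
         */
        unsigned long nr_pages = (4UL << 30) / 4096;
        unsigned long i, sent_4k = 0, sent_2m;
        bool *dirty = calloc(nr_pages, sizeof(*dirty));

        if (!dirty)
                return 1;
        for (i = 0; i < nr_pages / 100; i++)
                dirty[random() % nr_pages] = true;
        for (i = 0; i < nr_pages; i++)
                sent_4k += dirty[i];
        sent_2m = pages_sent_at_2m(dirty, nr_pages);

        printf("4K tracking: %lu pages, 2M tracking: %lu pages (%.1fx)\n",
               sent_4k, sent_2m, (double)sent_2m / sent_4k);
        free(dirty);
        return 0;
}

Uniformly random writes are close to the worst case: at a 1% dirty rate
almost every 2M region contains at least one dirty 4K page, so nearly the
whole guest gets resent. A workload whose writes cluster into hot 2M
regions would see almost no inflation, which is exactly the workload
sensitivity you point out.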