On Fri, Oct 18, 2019 at 1:32 AM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
>
> On 10/17/19 9:01 AM, Suleiman Souhlal wrote:
> > One problem that came up is that if you get into direct reclaim,
> > because persistent memory can have pretty low write throughput, you
> > can end up stalling users for a pretty long time while migrating
> > pages.
>
> Basically, you're saying that memory load spikes turn into latency spikes?

Yes, exactly.

> FWIW, we have been benchmarking this sucker with benchmarks that claim
> to care about latency. In general, compared to DRAM, we do see worse
> latency, but nothing catastrophic yet. I'd be interested if you have
> any workloads that act as reasonable proxies for your latency
> requirements.

Sorry, I don't know of any specific workloads I can share. :-(
Maybe Jonathan or Shakeel have something more.

I realize it's not very useful without specific examples, but even
disregarding persistent memory, we've had latency issues with direct
reclaim when using zswap. It's been enough of a problem that we're
running experiments with not doing zswap compression in direct reclaim
(while still doing it proactively). The low write throughput of
persistent memory would make this worse.

I think the case where we're most likely to run into this is when the
machine is close to an OOM situation and we end up thrashing rather
than OOM killing.

Somewhat related, I noticed that this patch series rate-limits
migrations from persistent memory to DRAM, but it might also make
sense to rate-limit migrations from DRAM to persistent memory. If all
the write bandwidth is taken by migrations, there might not be any
left for applications accessing pages in persistent memory, resulting
in higher latency. A rough sketch of what I mean is at the end of this
mail.

Another issue we ran into, which I think might also apply to this
patch series, is that because kernel memory can't be allocated on
persistent memory, it's possible for all of DRAM to fill up with user
memory and for kernel allocations to fail, even though there is still
plenty of free persistent memory. This is easy to trigger: just start
an application that is bigger than DRAM. To mitigate that, we
introduced a new watermark for DRAM zones above which user memory
can't be allocated, to leave some space for kernel allocations. A
sketch of that check is below as well.
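For the DRAM -> PMEM rate limit, something like the following is what
I have in mind. This is only a hypothetical sketch reusing the
kernel's existing ratelimit helpers, not code from this patch series
or from our tree; may_demote_page() and the pages/sec number are made
up:

#include <linux/ratelimit.h>

/*
 * Hypothetical sketch, not from this patch series: count each page
 * demoted from DRAM to persistent memory as one event in a ratelimit
 * window, so demotion can't consume all of the PMEM write bandwidth.
 * The burst value is made up (16384 4K pages/sec ~= 64MB/s).
 */
static DEFINE_RATELIMIT_STATE(demote_rs, HZ, 16384);

static bool may_demote_page(void)
{
	/* Returns false once this interval's demotion budget is spent. */
	return __ratelimit(&demote_rs);
}

Pages that fail the check would simply stay in DRAM until the next
reclaim pass, rather than stalling the allocating task.

And for the watermark, the check looks conceptually like the sketch
below. Again, this is not our actual change: "watermark_user" is an
invented field here, and the wiring into the allocator fast path is
omitted.

#include <linux/mmzone.h>
#include <linux/vmstat.h>

/*
 * Hypothetical sketch, not our actual change: an extra per-zone
 * reserve ("watermark_user" is an invented field) that only user
 * allocations must respect on DRAM zones.  Kernel allocations ignore
 * it, so they keep some DRAM headroom even when userspace has filled
 * the rest, since they can't fall back to persistent memory.
 */
static bool dram_user_alloc_ok(struct zone *zone, unsigned int order)
{
	unsigned long free = zone_page_state(zone, NR_FREE_PAGES);

	/* The allocation must leave the user reserve untouched. */
	return free > zone->watermark_user + (1UL << order);
}

--
Suleiman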