Re: [External] RE(2): FW: [LSF/MM/BPF TOPIC] SMDK inspired MM changes for CXL

> On Apr 1, 2023, at 4:51 AM, Gregory Price <gregory.price@xxxxxxxxxxxx> wrote:
> 
> On Tue, Apr 04, 2023 at 11:59:22AM -0700, Viacheslav A.Dubeyko wrote:
>> 
>> 
>>> On Apr 1, 2023, at 3:51 AM, Gregory Price <gregory.price@xxxxxxxxxxxx> wrote:
>>> 
>>> On Tue, Apr 04, 2023 at 05:58:05PM +0000, Adam Manzanares wrote:
>>>> On Tue, Apr 04, 2023 at 11:31:08AM +0300, Mike Rapoport wrote:
>>>>> 
>>>>> The point of zswap IIUC is to have small and fast swap device and
>>>>> compression is required to better utilize DRAM capacity at expense of CPU
>>>>> time.
>>>>> 
>>>>> Presuming CXL memory will have larger capacity than DRAM, why not skip the
>>>>> compression and use CXL as a swap device directly?
>>>> 
>>>> I like to shy away from saying CXL memory should be used for swap. I see a 
>>>> swap device as storing pages in a manner that is no longer directly addressable
>>>> by the cpu. 
>>>> 
>>>> Migrating pages to a CXL device is a reasonable approach and I believe we
>>>> have the ability to do this in the page reclaim code. 
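[As an aside: the reclaim-driven demotion Adam mentions is gated by a sysfs knob that landed around v5.15. A minimal sketch for checking it, assuming a Linux host; whether the file exists depends on the kernel config:]

```python
# Sketch only: read the reclaim-demotion knob (merged around v5.15).
# The sysfs path is the standard location; availability depends on
# the kernel configuration and memory-tiering support.
from pathlib import Path

def demotion_status(knob=Path("/sys/kernel/mm/numa/demotion_enabled")):
    """Return the knob's contents if the kernel exposes it, else None."""
    return knob.read_text().strip() if knob.exists() else None

if __name__ == "__main__":
    status = demotion_status()
    print("reclaim demotion:", "unavailable" if status is None else status)
```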
>>>> 
>>> 
>>> The argument is "why do you need swap if memory itself is elastic", and
>>> I think there are open questions about how performant using large
>>> amounts of high-latency memory is.
>>> 
>>> Think 1us-1.5us+ cross-rack attached memory.
>>> 
>>> Does it make sense to use that as CPU-addressable and migrate it on
>>> first use?  Isn't that just swap with more steps?  What happens if we
>>> just use it as swap, is the performance all that different?
>>> 
>>> I think there's a reasonable argument for exploring the idea at the
>>> higher ends of the latency spectrum.  And the simplicity of using an
>>> existing system (swap) to implement a form of proto-tiering is rather
>>> attractive in my opinion.
>>> 
>> 
>> I think the problem with swap is that we need to take into account the additional
>> latency of the swap-in/swap-out logic, which I assume is expensive enough. And if
>> we consider a huge graph, for example, I am afraid the swap-in/swap-out logic could
>> be costly. So, the question here is about the use-case. Which use-case would benefit
>> from employing swap as a big space of high-latency memory? I see your point
>> that such swap could be faster than persistent storage. But which use-case can be a
>> happy user of this space of high-latency memory?
>> 
>> Thanks,
>> Slava.
>> 
> 
> Just spitballing here - to me this problem is two fold:
> 
> I think the tiering use case and the swap use case are exactly the same.
> If tiering is sufficiently valuable, there exists a spectrum of compute
> density (cpu:dram:cxl:far-cxl) where simply using far-cxl as fast-swap
> becomes easier and less expensive than a complex tiering system.
> 
> So rather than a single use-case question, it reads like a tiering
> question to me:
> 
> 1) Where on the 1us-20us (far cxl : nvme) spectrum does it make sense to
>   switch from a swap mechanism to simply byte-addressable memory?
>   There's a point, somewhere, where promote on first access (effectively
>   swap) is the same performance as active tiering (for a given workload).
> 
>   If that point is under 2us, there's a good chance that a high-latency
>   CXL swap-system would be a major win for any workload on any cloud-based
>   system.  It's simple, clean, and reclaim doesn't have to worry about the
>   complexities of hotpluggable memory zones.
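[To put point 1 in rough numbers, here is a toy per-access cost model; every latency below is an assumption picked for illustration, not a measurement:]

```python
# Toy break-even model: promote-on-first-use vs. leaving the page in
# byte-addressable far memory. All latencies are illustrative assumptions.
FAULT_NS = 1_500   # assumed fault + migration bookkeeping per 4 KiB page
COPY_NS = 1_000    # assumed cost to copy the page out of far memory
DRAM_NS = 100      # assumed local DRAM access after promotion

def promote_cost(far_ns, touches):
    """Average ns/access if the first touch faults the page into DRAM."""
    first = FAULT_NS + COPY_NS + far_ns            # one expensive first touch
    return (first + (touches - 1) * DRAM_NS) / touches

def direct_cost(far_ns, touches):
    """Average ns/access if the page simply stays in far memory."""
    return far_ns

for far_ns in (1_000, 2_000, 20_000):              # near-CXL .. NVMe-ish
    for touches in (1, 8, 64):
        p, d = promote_cost(far_ns, touches), direct_cost(far_ns, touches)
        print(f"far={far_ns}ns touches={touches}: promote={p:.0f} direct={d:.0f}")
```

[Under these made-up numbers, promotion only wins once a page is re-touched a handful of times, and the crossover moves toward fewer touches as far latency climbs toward the NVMe end of the spectrum, which is exactly the question.]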
> 
> 
> Beyond that, to your point, what use-case is happy with this class of
> memory, and in what form?
> 
> 2) This is likely obscured by the fact that many large-memory
>   applications avoid swap like the plague by sharding data and creating
>   clusters. So it's hard to answer this until it's tested, and you
>   can't test it unless you make it... woo!
> 
>   Bit of a chicken/egg in here.  I don't know that anyone can say
>   definitively what workload can make use of it, but that doesn't mean
>   there isn't one.  So in the spectrum of risk/reward, at least
>   enabling some simple mechanism for the sake of exploration feels
>   exciting to say the least.
> 
> 
> More generally, I think a cxl-swap (cswap? ;V) would be useful exactly to
> help identify when watch-and-wait tiering becomes more performant than
> promote-on-first-use.  If you can't beat a simple fast-swap, why bother?
> 
> Again, I think this is narrowly applicable to high-latency CXL. My gut
> tells me that anything under 1us is better used in a byte-addressable
> manner, but once you start hitting 1us "It makes me go hmmm..."
> 
> I concede this is largely conjecture until someone tests it out, but
> certainly a fun thing to discuss.
> 

OK, I am buying your point. :) But first I need to allocate memory.
The really important point of CXL memory is the opportunity to extend
the memory space. Swap is not addressable memory, so it is useless for
memory space extension. Let’s imagine I have a small local DRAM (and
maybe some amount of “fast” CXL) plus huge far CXL used as swap space. I cannot
use that swap space for allocation, so the swap looks like useless space.
First, I need to extend my memory by means of “fast” CXL. And if I have
enough “fast” CXL, then I don’t need the far CXL memory. Granted, there is
never enough memory, but we are hungry for addressable memory.

A large-memory application would like to see the whole data set in memory.
But that means the data set needs to be addressable. Technically speaking,
it is possible to imagine that the data set can be partially in swap.
But the first step is memory allocation and prefetching data from persistent
memory. And, as far as I can imagine, the memory allocator will be limited by
addressable memory. So, I cannot have the whole data set in memory because
the memory allocator stops me.
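[For what it’s worth, the split described above is visible directly in /proc/meminfo: MemTotal counts the directly addressable RAM, while SwapTotal sits outside it. A Linux-only sketch; the values are whatever the host happens to have:]

```python
# Linux-only sketch: directly addressable RAM vs. swap capacity, as the
# kernel reports them. Keys are the standard /proc/meminfo field names.
def meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a {field: value-in-kB} dict."""
    fields = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key] = int(rest.split()[0])   # numeric part, in kB
    return fields

if __name__ == "__main__":
    m = meminfo()
    print(f"addressable RAM (MemTotal): {m['MemTotal']} kB")
    print(f"swap, not byte-addressable (SwapTotal): {m['SwapTotal']} kB")
```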

Thanks,
Slava.




