Re: [RFC] Memory tiering kernel alignment

On Thu, Jan 25, 2024 at 12:04:37PM -0800, David Rientjes wrote:
> On Thu, 25 Jan 2024, Matthew Wilcox wrote:
> > On Thu, Jan 25, 2024 at 10:26:19AM -0800, David Rientjes wrote:
> > > There is a lot of excitement around upcoming CXL type 3 memory expansion
> > > devices and their cost savings potential.  As the industry starts to
> > > adopt this technology, one of the key components in strategic planning is
> > > how the upstream Linux kernel will support various tiered configurations
> > > to meet various user needs.  I think it goes without saying that this is
> > > quite interesting to cloud providers as well as other hyperscalers :)
> > 
> > I'm not excited.  I'm disappointed that people are falling for this scam.
> > CXL is the ATM of this decade.  The protocol is not fit for the purpose
> > of accessing remote memory, adding 10ns just for an encode/decode cycle.
> > Hands up everybody who's excited about memory latency increasing by 17%.
> 
> Right, I don't think that anybody is claiming that we can leverage locally 
> attached CXL memory as though it were DRAM on the same or a remote socket, 
> or that there won't be a noticeable impact to application performance 
> while the memory is still across the device.
> 
> It does offer several cost-saving benefits for offloading of cold memory, 
> though, if locally attached, and I think the support for that use case is 
> inevitable -- in fact, Linux already has some sophisticated support for 
> the locally attached use case.
> 
> > Then there are the lies from the vendors who want you to buy switches.
> > Not one of them is willing to guarantee you the worst case latency
> > through their switches.
> 
> I should have prefaced this thread by saying "locally attached CXL memory 
> expansion", because that's the primary focus of many of the folks on this 
> email thread :)
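[For context on the 17% figure quoted above: it can be reproduced with a back-of-the-envelope calculation. The ~60 ns local-DRAM load-to-use baseline below is an assumption; the thread only states the 10 ns encode/decode adder.]

```python
# Back-of-the-envelope check of the "17%" latency-increase figure above.
# The 60 ns local DRAM baseline is an assumption; the thread only gives
# the 10 ns CXL encode/decode overhead.
dram_latency_ns = 60
cxl_encode_decode_ns = 10

increase = cxl_encode_decode_ns / dram_latency_ns
print(f"latency increase: {increase:.0%}")  # -> latency increase: 17%
```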

That's a huge relief.  I was not looking forward to the patches to add
support for pooling (etc).

Using CXL as cold-data-storage makes a certain amount of sense, although
I'm not really sure why it offers an advantage over NAND.  It's faster
than NAND, but you still want to bring it back locally before operating
on it.  NAND is denser, and consumes less power while idle.  NAND comes
with a DMA controller to move the data instead of relying on the CPU to
move the data around.  And of course moving the data first to CXL and
then to swap means that it's got to go over the memory bus multiple
times, unless you're building a swap device which attaches to the
other end of the CXL bus ...
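[A pointer on the "sophisticated support" for locally attached tiering mentioned above: recent kernels gate reclaim-time demotion of cold pages to a slower node behind a sysfs knob. A minimal sketch for inspecting it, assuming `/sys/kernel/mm/numa/demotion_enabled` is exposed by the running kernel (this depends on kernel version and config):]

```python
# Hedged sketch: read the kernel's NUMA demotion knob, which controls whether
# reclaim demotes cold pages to a slower tier (e.g. locally attached CXL)
# instead of swapping them out.  The sysfs path is an assumption about the
# running kernel; absence is handled gracefully.
from pathlib import Path

def demotion_state(sysfs="/sys/kernel/mm/numa/demotion_enabled"):
    """Return the knob's contents ('true'/'false'), or None if absent."""
    knob = Path(sysfs)
    return knob.read_text().strip() if knob.exists() else None

if __name__ == "__main__":
    print("demotion_enabled:", demotion_state())
```

[Writing `true` to the same file (as root) opts reclaim into demotion rather than immediate swap-out.]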
