RE: [PATCH 7/8] zswap: add to mm/

> From: Dave Hansen [mailto:dave@xxxxxxxxxxxxxxxxxx]
> Subject: Re: [PATCH 7/8] zswap: add to mm/
> 
> On 01/01/2013 09:52 AM, Seth Jennings wrote:
> > On 12/31/2012 05:06 PM, Dan Magenheimer wrote:
> >> A second related issue that concerns me is that, although you
> >> are now, like zcache2, using an LRU queue for compressed pages
> >> (aka "zpages"), there is no relationship between that queue and
> >> physical pageframes.  In other words, you may free up 100 zpages
> >> out of zswap via zswap_flush_entries, but not free up a single
> >> pageframe.  This seems like a significant design issue.  Or am
> >> I misunderstanding the code?
> >
> > You understand correctly.  There is room for optimization here and it
> > is something I'm working on right now.
> 
> It's the same "design issue" that the slab shrinkers have, and they are
> likely to have substantially and consistently smaller object sizes.

Understood, Dave.  However, if one compares the total percentage
of RAM used for zpages by zswap vs. the total percentage of RAM
used by slab, I suspect that the zswap number will dominate,
perhaps because zswap is storing primarily data while slab is
storing primarily metadata?

I don't claim to be any kind of expert here, but I'd imagine
that MM doesn't try to manage the total amount of slab space
because slab is "a cost of doing business".  However, for
in-kernel compression to be widely useful, IMHO it will be
critical for MM to somehow balance the total pageframes used
for compressed pages against the total pageframes used for
normal pages, just as today it needs to balance between
active and inactive pages.
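
To make that concrete, here is a minimal sketch of what
pageframe-based (rather than zpage-based) reclaim accounting could
look like.  This is not zswap's actual code; every identifier in it
(zswap_shrink_pageframes(), zpool_nr_pageframes(),
zswap_writeback_lru_entry()) is hypothetical and only illustrates
the idea:

	/*
	 * Hypothetical sketch: keep writing the oldest zpages back to
	 * the swap device, but count progress in whole pageframes
	 * handed back by the allocator, since evicting a zpage only
	 * frees a pageframe when it was the last object packed into
	 * that frame.
	 */
	static unsigned long zswap_shrink_pageframes(unsigned long want)
	{
		unsigned long start = zpool_nr_pageframes();	/* hypothetical */
		unsigned long cur, freed = 0;

		while (freed < want) {
			/* Evict the LRU entry; nonzero means stop. */
			if (zswap_writeback_lru_entry())	/* hypothetical */
				break;	/* LRU empty or writeback error */

			cur = zpool_nr_pageframes();
			freed = (cur < start) ? start - cur : 0;
		}
		return freed;
	}

The point is only the exit condition: it is expressed in pageframes
returned to the allocator, which is the unit MM would need in order
to balance compressed against uncompressed memory.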
 
> >> A third concern is about scalability... the locking seems very
> >
> > The reason the coarse lock isn't a problem for zswap like the hash
> 
> Lock hold times don't often dominate lock cost these days.  The limiting
> factor tends to be the cost of atomic operations to bring the cacheline
> over to the CPUs acquiring the lock.

[I'll bow out of the scalability discussion as long as someone
else is thinking about it.]
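
For whoever does pick it up: the usual shape of the fix for the
cacheline bouncing Dave describes is to stripe the lock, i.e. use
several locks on independent cachelines, hashed by the key being
looked up, so concurrent acquirers do not all hammer the same line
with their atomic operations.  The sketch below is purely
illustrative; the names, the stripe count, and keying by swap
offset are all made up and say nothing about what zswap actually
does or should do.

	#include <linux/hash.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	#define ZSWAP_LOCK_BITS	6	/* 64 stripes; made-up value */

	/*
	 * One lock per cacheline so that acquirers of different
	 * stripes do not bounce the same cacheline between CPUs.
	 * spin_lock_init() must be run on each entry at init time.
	 */
	static struct {
		spinlock_t lock;
	} ____cacheline_aligned zswap_locks[1 << ZSWAP_LOCK_BITS];

	static inline spinlock_t *zswap_lock_for(pgoff_t offset)
	{
		return &zswap_locks[hash_long(offset, ZSWAP_LOCK_BITS)].lock;
	}

Whether the tree lock is really the bottleneck would of course have
to be measured; this only shows the technique Dave's comment points
at.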

Dan

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/devel

