RE: zsmalloc limitations and related topics

> From: Dan Magenheimer
> Subject: RE: zsmalloc limitations and related topics
> 
> > > I would welcome ideas on how to evaluate workloads for
> > > "representativeness".  Personally I don't believe we should
> > > be making decisions about selecting the "best" algorithms
> > > or merging code without an agreement on workloads.
> >
> > I'd argue that there is no such thing as a "representative workload".
> > Instead, we try different workloads to validate the design and illustrate
> > the performance characteristics and impacts.
> 
> Sorry for repeatedly hammering my point in the above, but
> there have been many design choices driven by what was presumed
> to be a representative workload (kernbench, and now SPECjbb)
> that may be entirely wrong for a different workload (as
> Seth once pointed out, using the text of Moby Dick as a source
> data stream).
> 
> Further, the value of different designs can't be measured here just
> by the workload because the pages chosen to swap may be completely
> independent of the intended workload-driver... i.e. if you track
> the pid of the pages intended for swap, the pages can be mostly
> pages from long-running or periodic system services, not pages
> generated by kernbench or SPECjbb.  So it is the workload PLUS the
> environment that is being measured and evaluated.  That makes
> the problem especially tough.
> 
> Just to clarify, I'm not suggesting that there is any single
> workload that can be called representative, just that we may
> need both a broad set of workloads (not silly benchmarks) AND
> some theoretical analysis to drive design decisions.  And, without
> this, arguing about whether zsmalloc is better than zbud or not
> is silly.  Both zbud and zsmalloc have strengths and weaknesses.
> 
> That said, it should also be pointed out that the stream of
> pages-to-compress from cleancache ("file pages") may be dramatically
> different from that for frontswap ("anonymous pages"), so unless you
> and Seth are going to argue upfront that cleancache pages should
> NEVER be candidates for compression, the evaluation criteria
> to drive design decisions needs to encompass both anonymous
> and file pages.  It is currently impossible to evaluate that
> with zswap.
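
A quick aside on the pid-tracking point above: short of
instrumenting frontswap itself, a crude userspace proxy is to walk
/proc and report per-process VmSwap, which at least shows whose
pages ended up swapped out, if not which individual pages frontswap
saw.  A rough sketch, assuming a kernel new enough to export VmSwap
in /proc/<pid>/status:

/*
 * Walk /proc and report VmSwap per process: a crude proxy for
 * "whose pages ended up swapped", not a trace of what frontswap
 * actually saw.  Assumes the kernel exports VmSwap in
 * /proc/<pid>/status.
 *
 * Build: gcc -o vmswap vmswap.c
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	char path[64], line[256], comm[64];

	if (!proc) {
		perror("opendir /proc");
		return 1;
	}

	while ((de = readdir(proc)) != NULL) {
		long kb = -1;
		FILE *f;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;

		snprintf(path, sizeof(path), "/proc/%s/status", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;	/* process exited; skip it */

		comm[0] = '\0';
		while (fgets(line, sizeof(line), f)) {
			if (sscanf(line, "Name: %63s", comm) == 1)
				continue;
			if (sscanf(line, "VmSwap: %ld kB", &kb) == 1)
				break;
		}
		fclose(f);

		if (kb > 0)	/* only processes with pages swapped out */
			printf("%8ld kB  pid %-6s %s\n", kb, de->d_name, comm);
	}
	closedir(proc);
	return 0;
}

Running something like that during a kernbench run would show
whether the swapped pages really belong to the benchmark or, as
suggested above, mostly to long-running system services.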

Sorry to reply to myself here, but I realized last night that
I left off another related important point:

We have a tendency to run benchmarks on a "cold" system
so that the results are reproducible.  For compression, however,
this may unnaturally skew the entropy of the pages to be compressed,
and therefore also the density measurements.

I can't prove it, but I suspect that soon after boot the number
of anonymous pages containing all (or nearly all) zeroes is large,
i.e. entropy is low.  As more time passes since boot, more
anonymous pages will have been written with non-zero data,
increasing entropy and decreasing compressibility.

So, over time, the distribution of zsize may slowly skew right
(toward PAGE_SIZE).

If so, this effect may be very real but very hard to observe.
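
I can't sample a live system's anonymous pages here to show the
aging effect, but the underlying entropy-to-zsize relationship is
easy enough to see from userspace with a toy like the one below.
zlib's compress() is just a stand-in for the kernel-side compressor
(lzo via the crypto API in zswap's case, last I looked), and the
three synthetic pages are obviously a stand-in for real anonymous
data:

/*
 * Toy illustration only: compress three kinds of 4K "pages" with
 * zlib and print the resulting "zsize".  zlib stands in for the
 * kernel-side compressor; the synthetic pages stand in for real
 * anonymous data.  The point is just that zero-filled pages
 * (common soon after boot) compress to almost nothing, while
 * high-entropy pages stay close to PAGE_SIZE.
 *
 * Build: gcc -o zsize zsize.c -lz
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096

static void report(const char *what, const unsigned char *page)
{
	unsigned char out[PAGE_SIZE + 128];	/* ample for zlib worst case */
	uLongf zsize = sizeof(out);

	if (compress(out, &zsize, page, PAGE_SIZE) != Z_OK) {
		fprintf(stderr, "compress failed for %s\n", what);
		return;
	}
	printf("%-10s zsize = %4lu of %d\n", what,
	       (unsigned long)zsize, PAGE_SIZE);
}

int main(void)
{
	unsigned char page[PAGE_SIZE];
	FILE *rnd;
	int i;

	/* all zeroes: what many anonymous pages look like soon after boot */
	memset(page, 0, sizeof(page));
	report("zeroes", page);

	/* repetitive text-ish data: still fairly low entropy */
	for (i = 0; i < PAGE_SIZE; i++)
		page[i] = "the quick brown fox "[i % 20];
	report("text-like", page);

	/* random bytes: high entropy, barely compressible at all */
	rnd = fopen("/dev/urandom", "r");
	if (rnd && fread(page, 1, sizeof(page), rnd) == sizeof(page))
		report("random", page);
	if (rnd)
		fclose(rnd);
	return 0;
}

If the mix of anonymous pages really does drift from the first kind
toward the last as uptime grows, then density numbers taken from a
just-booted benchmark run will tend to flatter any allocator that
is tuned for small zsize.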

Dan


