Re: mm performance with zram

On Thu, Jan 8, 2015 at 10:30 PM, Andrew Morton
<akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 8 Jan 2015 14:49:45 -0800 Luigi Semenzato <semenzato@xxxxxxxxxx> wrote:
>
>> I am taking a closer look at the performance of the Linux MM in the
>> context of heavy zram usage.  The bottom line is that there is
>> surprisingly high overhead (35-40%) from MM code other than
>> compression/decompression routines.
>
> Those images hurt my eyes.

Sorry about that.  I didn't find another way of computing the
cumulative cost of functions (i.e. time spent in a function and all
its descendants, as in gprof).  I couldn't get perf to do that
either.  A flat profile shows most functions taking a fraction of 1%,
so it's not useful.  If anybody knows a better way I'll be glad to
use it.
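(For what it's worth, newer perf can report gprof-style inclusive cost
directly.  A sketch of the commands, assuming perf >= 3.16, where the
"Children" column was added to perf report; "./workload" stands in for
whatever reproduces the zram load:

```shell
# Record call-graph samples; dwarf unwinding tends to give more
# complete stacks than frame pointers on optimized builds, provided
# debug info is available.
perf record -g --call-graph dwarf -- ./workload

# The "Children" column is time spent in a function plus all of its
# callees -- the cumulative cost gprof reports.
perf report --children --stdio
```

The same --children flag works in the interactive TUI as well.)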

> Did you work out where the time is being spent?

No, unfortunately the graph profile is also difficult to make sense
of, especially given my limited familiarity with the code.  There is
a surprising number of different callers into the heaviest nodes, and
I cannot tell which paths correspond to which high-level actions.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .