Re: [LSF/MM TOPIC] Test cases to choose for demonstrating mm features or fixing mm bugs

On Tue 29-01-19 21:43:28, Balbir Singh wrote:
> On Mon, Jan 28, 2019 at 12:34:42PM +0100, Michal Hocko wrote:
> > On Mon 28-01-19 22:20:33, Balbir Singh wrote:
> > > Sending a patch to linux-mm today has become a complex task. One of the
> > > reasons for the complexity is the lack of a clear expectation of which
> > > tests to run.
> > > 
> > > Mel Gorman has a set of tests [1], but there is no easy way to select
> > > which tests to run. Some of them are proprietary (spec*), and others
> > > have widely varying run times. A single-line change may require hours
> > > or days of testing, on top of the complexity of configuration. It takes
> > > a lot of tweaking and repeated test runs to settle on what to run,
> > > which configuration to choose and which benefit to show.
> > > 
> > > The proposal is to have a discussion on how to design a good sanity
> > > test suite for the mm subsystem, which could potentially include
> > > OOM test cases and known problem patterns with proposed changes.
> > 
> > I am not sure I follow. What is the problem you would like to solve?
> > If tests are taking too long then there is most probably a good reason
> > for that. Are you thinking of any specific tests which should be run or
> > even included in MM tests or similar?
> 
> Let me elaborate. Every time I find something interesting to develop or
> fix, I think about how to test the changes. For well-established code
> (such as reclaim), or even other features, it is hard to find good test
> cases to run as a baseline which ensure that
> 
> 1. The tests provide good coverage of the changes
> 2. The right test cases have been run from a performance perspective
> 
> The reason I brought up time was not the time for a single test, but
> the cumulative time of all the tests in the absence of good guidance
> for (1) and (2) above.
> 
> IOW, what guidance can we provide to patch writers and bug fixers in terms
> of what testing to carry out? How do we avoid biases in results and
> ensure consistency?

Well, I am afraid that there is no reference workload for reclaim
behavior or for many of the other heuristics MM uses. This will always
be workload dependent. Mel's mm-tests cover a wide variety of workloads,
and there might be more of course. The most important part is how well
those represent real workloads people actually care about.

Abstracting workloads which are not in the test suites yet is definitely
a step in the right direction.
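
As an illustration of the kind of workload worth abstracting, here is a
minimal sketch (hypothetical, not taken from mm-tests) of an
anonymous-memory pressure generator. Run repeatedly inside a
memory-limited cgroup (e.g. with memory.max set below the touched size)
it gives a crude but reproducible reclaim signal:

/*
 * Hypothetical sketch, not part of any existing test suite: map a fixed
 * amount of anonymous memory, touch every page to force allocation, and
 * report the wall-clock time.  Repeated runs under different memory
 * limits give a rough measure of reclaim overhead.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	size_t mb = argc > 1 ? strtoul(argv[1], NULL, 0) : 1024;
	size_t len = mb << 20;
	long page = sysconf(_SC_PAGESIZE);
	struct timespec t0, t1;
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t off = 0; off < len; off += page)
		buf[off] = 1;		/* fault in every page */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("touched %zu MB in %.3f s\n", mb,
	       (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9);
	munmap(buf, len);
	return 0;
}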
-- 
Michal Hocko
SUSE Labs
