On Mon 12-08-19 11:33:26, Sasha Levin wrote:
[...]
> I'd be happy to run whatever validation/regression suite for mm/ you
> would suggest.

You would have to develop one first, and I am afraid that won't be
really simple or all that useful.

> I've heard the "every patch is a snowflake" story quite a few times, and
> I understand that most mm/ patches are complex, but we agree that
> manually testing every patch isn't scalable, right? Even for patches
> that mm/ tags for stable, are they actually tested on every stable tree?
> How is it different from the "applies-it-must-be-ok" workflow?

There is a human brain put into processing each patch to make sure
that the change makes sense and that we won't break any of the many
workloads people care about. Even if you run your patch through mm
tests, which is by far the most comprehensive test suite I know of,
we do regress from time to time. We simply do not have realistic
testing coverage because workloads differ quite a lot and they are
not really trivial to isolate into self-contained test cases. A lot
of functionality doesn't have a direct interface to test against,
because it only triggers when the system gets into a certain state.

Ideal? Not at all, and I am happy to hear better ideas. Until then we
simply have to rely on gut feeling, an understanding of the code, and
experience from workloads we have seen in the past.
--
Michal Hocko
SUSE Labs