On Thu, Mar 05, 2020 at 08:17:46PM -0800, Hugh Dickins wrote:
> On Tue, 3 Mar 2020, Alex Shi wrote:
> > On 2020/3/3 6:12 AM, Andrew Morton wrote:
> > >> Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu,
> > >> and Yun Wang.
> > > I'm not seeing a lot of evidence of review and test activity yet.  But
> > > I think I'll grab patches 01-06 as they look like fairly
> > > straightforward improvements.
> > 
> > cc Fengguang and Rong Chen
> > 
> > I did some local functional testing and kselftest, they all look fine.
> > 0day only warns me if some case failed. Is it that no news is good news? :)
> 
> And now the bad news.
> 
> Andrew, please revert those six (or seven as they ended up in mmotm).
> 5.6-rc4-mm1 without them runs my tmpfs+loop+swapping+memcg+ksm kernel
> build loads fine (did four hours just now), but 5.6-rc4-mm1 itself
> crashed just after starting - whether in seconds or minutes I didn't see,
> but it did not complete an iteration.
> 
> I thought maybe those six would be harmless (though I've not looked
> at them at all); but I knew already that the full series is not good yet:
> I gave it a try over 5.6-rc4 on Monday, and it crashed very soon on
> simpler testing, in different ways from what hits mmotm.
> 
> The first thing wrong with the full set was when I tried tmpfs+loop+
> swapping kernel builds in "mem=700M cgroup_disabled=memory", of course
> with CONFIG_DEBUG_LIST=y.  That soon collapsed in a splurge of OOM kills
> and list_del corruption messages: __list_del_entry_valid < list_del <
> __page_cache_release < __put_page < put_page < __try_to_reclaim_swap <
> free_swap_and_cache < shmem_free_swap < shmem_undo_range.
> 
> When I next tried with "mem=1G" and memcg enabled (but not being used),
> that managed some iterations, with no OOM kills and no list_del warnings
> (was it swapping? perhaps, perhaps not, I was trying to go easy on it
> just to see if "cgroup_disabled=memory" had been the problem); but when
> rebooting after that, again list_del corruption messages and a crash
> (I didn't note them down).
> 
> So I didn't take much notice of what the mmotm crash backtrace showed
> (but IIRC shmem and swap were in it).
> 
> Alex, I'm afraid you're focusing too much on performance results,
> without doing the basic testing needed - I thought we had given you
> some hints on the challenging areas (swapping, move_charge_at_immigrate,
> page migration) when we attached a *correctly working* 5.3 version back
> on 23rd August:
> 
> https://lore.kernel.org/linux-mm/alpine.LSU.2.11.1908231736001.16920@eggly.anvils/
> 
> (Correctly working, except for missing two patches I'd mistakenly dropped
> as unnecessary in earlier rebases: our discussions with Johannes later
> showed them to be very necessary, though the races they close are rarely
> seen.)
> 
> I have not had the time (and do not expect to have the time) to review
> your series: maybe it's one or two small fixes away from being complete,
> or maybe it's still fundamentally flawed, I do not know.  I had naively
> hoped that you would help with a patchset that worked, rather than
> cutting it down into something which does not.

I'm a bit confused by this.  I, and I believe Alex, kept going down a
different path because it didn't sound like there was a solution to the
compaction race.  As I remember, the conversation ended on this:

: Your race here (again, lruvec lock taken then PageLRU observed, but
: page->mem_cgroup changed in between) really questions my whole scheme:
: I am not going to propose a solution now, I'll have to go back and
: recheck my assumptions all over.
: Certainly isolate_migratepage_block()
: has a harder job than any other, but I need to re-review it all.

https://lore.kernel.org/lkml/alpine.LSU.2.11.1911221616580.1144@eggly.anvils/

That's certainly why I kept looking and eventually proposed using PageLRU
clearing as a lock.  Maybe there is a better way to do it, but I didn't
see it (a rough sketch of the idea is appended at the end of this mail).
An LRU list corruption in __page_cache_release() suggests a bug in the
way this new locking scheme works or is applied - rather than a
gratuitous divergence from your series that could have been avoided.

> Submitting your series to routine testing is much easier for me than
> reviewing it: but then, yes, it's a pity that I don't find the time
> to report the results on intervening versions, which also crashed.
> 
> What I have to do now is set aside time today and tomorrow, to package
> up the old scripts I use, describe them and their environment, and send
> them to you (cc akpm in case I fall under a bus): so that you can
> reproduce the crashes for yourself, and get to work on them.

I think that would be very useful.  tmpfs+loop+swapping+memcg+ksm kernel
builds aren't exactly a go-to test case for most mm developers (although
maybe they should be!)
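
For anyone who hasn't followed the earlier thread, here is the rough
sketch of the "PageLRU clearing as a lock" idea referred to above.  It
is illustrative only, not copied from any posted patch: the per-lruvec
lru_lock field, the TestClearPageLRU helper and sketch_isolate_lru_page()
itself are assumptions based on the series under discussion, and real
code would also have to cope with compound pages, isolation modes, etc.

#include <linux/mm.h>
#include <linux/mm_inline.h>
#include <linux/memcontrol.h>
#include <linux/swap.h>

/*
 * Racy ordering (the one Hugh's quoted mail warns about):
 *
 *	lruvec = mem_cgroup_page_lruvec(page, pgdat);	// from page->mem_cgroup
 *	spin_lock_irq(&lruvec->lru_lock);
 *	if (PageLRU(page)) ...
 *
 * By the time PageLRU is tested, page->mem_cgroup may already have
 * changed, so the lock held may belong to the wrong lruvec.
 *
 * Sketch of the alternative: claim the page with an atomic clear of
 * PageLRU first, so that the winner is the only one allowed to touch
 * the page's LRU state, and the lruvec looked up afterwards cannot
 * change underneath us.
 */
static bool sketch_isolate_lru_page(struct page *page, struct list_head *dst)
{
	struct lruvec *lruvec;

	/* Atomic test-and-clear: only one CPU wins; losers keep off the LRU. */
	if (!TestClearPageLRU(page))
		return false;

	get_page(page);

	/*
	 * With PageLRU cleared, memcg charge moving (which must isolate the
	 * page first) cannot re-home it, so the lruvec we look up here stays
	 * stable while we hold its lock (lru_lock living in the lruvec is an
	 * assumption taken from the series).
	 */
	lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
	spin_lock_irq(&lruvec->lru_lock);

	del_page_from_lru_list(page, lruvec, page_lru(page));
	list_add(&page->lru, dst);

	spin_unlock_irq(&lruvec->lru_lock);
	return true;
}

Presumably isolate_migratepage_block() would have to do the same
TestClearPageLRU dance before touching any lruvec, which is exactly the
path Hugh singles out above as having the hardest job.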