On Tue, Nov 12, 2019 at 04:56:52PM -0500, Johannes Weiner wrote:
> On Tue, Nov 12, 2019 at 01:04:03PM -0800, Kees Cook wrote:
> > On Tue, Nov 12, 2019 at 10:21:23AM -0500, Johannes Weiner wrote:
> > > On Mon, Nov 11, 2019 at 05:35:37PM -0800, coverity-bot wrote:
> > > > Hello!
> > > >
> > > > This is an experimental automated report about issues detected by Coverity
> > > > from a scan of next-20191108 as part of the linux-next weekly scan project:
> > > > https://scan.coverity.com/projects/linux-next-weekly-scan
> > > >
> > > > You're getting this email because you were associated with the identified
> > > > lines of code (noted below) that were touched by recent commits:
> > > >
> > > > c34aa3085f94 ("mm-vmscan-split-shrink_node-into-node-part-and-memcgs-part-fix")
> > > >
> > > > Coverity reported the following:
> > > >
> > > > *** CID 1487844: Null pointer dereferences (NULL_RETURNS)
> > > > /mm/vmscan.c: 2695 in shrink_node_memcgs()
> > > > 2689		memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
> > > > 2690		do {
> > > > 2691			struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > > 2692			unsigned long reclaimed;
> > > > 2693			unsigned long scanned;
> > > > 2694
> > > > vvv	CID 1487844: Null pointer dereferences (NULL_RETURNS)
> > > > vvv	Dereferencing a pointer that might be "NULL" "memcg" when calling "mem_cgroup_protected".
> > > > 2695			switch (mem_cgroup_protected(target_memcg, memcg)) {
> > >
> > > This appears to be a false alarm.
> >
> > Okay, thanks!
> >
> > > All the "culprit" patch did was rename the local variable
> > > "target_memcg".
> > >
> > > And while it's correct that memcg can be NULL (before and after this
> > > patch), it's the case only when mem_cgroup_disabled(), and
> > > mem_cgroup_protected() checks for this case.
> >
> > Right, that's certainly the design. I wonder if, in the interests of
> > defensively asserting requirements, it would be worth adding something
> > like this to mem_cgroup_protected():
> >
> > 	if (WARN_ON_ONCE(!memcg))
> > 		return MEMCG_PROT_NONE;
>
> I'm having trouble enumerating the number of places where we would
> crash in reclaim if memcg were zero while the mem controller is on.
>
> And even if we annotated all of them and dreamed up more or less
> sensical exit values for all of these places, we'd quickly panic due
> to failing page reclaim.
>
> There is no graceful exit strategy here. We may as well take the crash
> right away, without having to clutter up the code.

Okay, cool. I was just thinking mem_cgroup_protected() would be central
enough since it's already tested in tons of places. Thanks for looking
at it!

-- 
Kees Cook