Still no measurable progress on this one, but some new information. To recapitulate:

--- vanilla 3.4 kernel + hacky min_filelist_kbytes patch + Minchan's patch below:

>> > --- a/mm/vmscan.c
>> > +++ b/mm/vmscan.c
>> > @@ -2101,7 +2101,7 @@ static bool all_unreclaimable(struct zonelist *zonelist,
>> >  			continue;
>> >  		if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
>> >  			continue;
>> > -		if (!zone->all_unreclaimable)
>> > +		if (zone->pages_scanned < zone_reclaimable_pages(zone) * 6)
>> >  			return false;
>> >  	}

--- no longer running the Chrome browser; instead, running a synthetic load: several instances of a process that allocates 200 MB, then touches some subset of its pages in an endless loop. The process's data segment compresses well (10:1). (A sketch of this program is appended at the end of this mail.)

--- running the load on two similar systems: one ARM-based, the other x86-based. Both systems run the same kernel and the same image (with different but equivalent configurations). Both have 2 GB of RAM.

On the x86 system, the mm behaves as expected: all 3 GB of the zram device are consumed before OOM kills happen. On the ARM system, OOM kills start happening while about 2.1 GB of swap are still available. Because the compression ratio is so good, the zram disk is only using 100 to 150 MB.

Otherwise the systems are quite similar. The x86 device has a rotating disk, vs. an SSD on the ARM device. This could affect the speed of paging in code, but the program is very small, so I don't think that's a factor. There are no messages from zram in the log.

It could be an ARM-specific bug, or the bug may be present on both systems, with ARM's performance characteristics being different enough to expose it. I will continue trying to figure out why kswapd isn't more proactive on ARM.

Thanks!

Luigi
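
For reference, a minimal sketch of the synthetic load described above. The sizes, the touched fraction, and the sleep interval are illustrative, not the exact values used in the real test:

/*
 * Synthetic load sketch: allocate 200 MB of highly compressible
 * data, then touch a subset of the pages in an endless loop.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALLOC_SIZE	(200UL * 1024 * 1024)	/* 200 MB */
#define PG_SIZE		4096UL
#define TOUCH_STRIDE	4			/* touch 1 page in 4 */

int main(void)
{
	char *buf = malloc(ALLOC_SIZE);
	unsigned long npages = ALLOC_SIZE / PG_SIZE;
	unsigned long i;

	if (!buf) {
		perror("malloc");
		return 1;
	}

	/*
	 * Mostly-zero data with one distinct byte per page, so every
	 * page is dirtied but the region still compresses very well
	 * (on the order of 10:1 under zram).
	 */
	memset(buf, 0, ALLOC_SIZE);
	for (i = 0; i < npages; i++)
		buf[i * PG_SIZE] = (char)i;

	/* Endless loop touching a subset of the pages. */
	for (;;) {
		for (i = 0; i < npages; i += TOUCH_STRIDE)
			buf[i * PG_SIZE]++;
		usleep(1000);
	}

	return 0;
}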