> > @@ -1835,8 +1978,6 @@ static void shrink_zone(int priority, st
> >  			break;
> >  	}
> >
> > -	sc->nr_reclaimed = nr_reclaimed;
> > -
> >  	/*
> >  	 * Even if we did not try to evict anon pages at all, we want to
> >  	 * rebalance the anon lru active/inactive ratio.
> > @@ -1844,6 +1985,23 @@ static void shrink_zone(int priority, st
> >  	if (inactive_anon_is_low(zone, sc))
> >  		shrink_active_list(SWAP_CLUSTER_MAX, zone, sc, priority, 0);
> >
> > +	/*
> > +	 * Don't shrink slabs when reclaiming memory from
> > +	 * over limit cgroups
> > +	 */
> > +	if (sc->may_reclaim_slab) {
> > +		struct reclaim_state *reclaim_state = current->reclaim_state;
> > +
> > +		shrink_slab(zone, sc->nr_scanned - nr_scanned,

> Doubtful calculation. What does "sc->nr_scanned - nr_scanned" mean?

I think nr_scanned simply keeps the old slab balancing behavior.

However, per-zone reclaim can lead to a new issue. On a 32-bit highmem
system, the system can theoretically have the following memory usage:

	ZONE_HIGHMEM: 100% used for page cache
	ZONE_NORMAL:  100% used for slab

so the traditional page-cache/slab balancing may not work. I think the
following new calculation, or something like it, is necessary:

	if (zone_reclaimable_pages() > NR_SLAB_RECLAIMABLE) {
		use the current calculation
	} else {
		shrink "objects >> reclaim-priority" objects
		(as in the page cache scanning calculation)
	}

That could perhaps be a separate patch, though. A rough sketch follows
after the quoted hunk.

> >
> > +				lru_pages, global_lru_pages, sc->gfp_mask);
> > +		if (reclaim_state) {
> > +			nr_reclaimed += reclaim_state->reclaimed_slab;
> > +			reclaim_state->reclaimed_slab = 0;
> > +		}
> > +	}
> > +
> > +	sc->nr_reclaimed = nr_reclaimed;
> > +
> >  	throttle_vm_writeout(sc->gfp_mask);
> >  }
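
To illustrate the idea, here is a rough, standalone userspace sketch of
that fallback. It is only a model, not the real kernel interfaces:
zone_reclaimable_pages() and NR_SLAB_RECLAIMABLE are folded into plain
parameters (lru_pages, slab_pages), the scanned/seeks scaling mimics the
existing shrink_slab() ratio, and all the numbers are made up.

/* slab_balance.c - sketch of a two-mode per-zone slab scan target.
 * Build: cc -o slab_balance slab_balance.c
 */
#include <stdio.h>

static unsigned long slab_scan_target(unsigned long scanned,    /* LRU pages scanned in this zone */
				      unsigned long lru_pages,  /* reclaimable LRU pages in this zone */
				      unsigned long slab_pages, /* reclaimable slab pages in this zone */
				      unsigned long nr_objects, /* objects reported by the shrinker */
				      int seeks,                /* cost to recreate one object */
				      int priority)             /* reclaim priority, DEF_PRIORITY..0 */
{
	if (lru_pages > slab_pages) {
		/* Current behavior: slab pressure proportional to the
		 * fraction of the zone's LRU that was scanned. */
		unsigned long long delta = (4ULL * scanned / seeks) * nr_objects;
		return (unsigned long)(delta / (lru_pages + 1));
	}
	/* Slab-heavy zone (e.g. ZONE_NORMAL 100% slab): fall back to the
	 * page-cache style "size >> priority" scan target. */
	return nr_objects >> priority;
}

int main(void)
{
	/* Cache-heavy zone: the current ratio gives useful pressure. */
	printf("cache-heavy zone:        %lu objects\n",
	       slab_scan_target(1024, 200000, 1000, 50000, 2, 10));
	/* Slab-heavy zone: lru_pages is tiny, so the ratio alone would
	 * give almost nothing; the fallback still applies pressure that
	 * grows as priority drops. */
	printf("slab-heavy, priority 10: %lu objects\n",
	       slab_scan_target(1024, 1000, 200000, 50000, 2, 10));
	printf("slab-heavy, priority 2:  %lu objects\n",
	       slab_scan_target(1024, 1000, 200000, 50000, 2, 2));
	return 0;
}

The point is just that scan pressure keeps scaling with reclaim
priority even when the zone's LRU is nearly empty, which the plain
scanned/lru_pages ratio cannot do.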