On Wed, Jan 22, 2014 at 01:57:14AM -0500, Johannes Weiner wrote:
> Not at this time, I'll try to look into that.  For now, I am updating
> the patch to revert the shrinker back to DEFAULT_SEEKS and change the
> object count to only include objects above a certain threshold, which
> assumes a worst-case population of 4 in 64 slots.  It's not perfect,
> but neither was the seeks magic, and it's easier to reason about what
> it's actually doing.

Ah, the quality of 2am submissions... 8 out of 64, of course.

> @@ -266,14 +269,38 @@ struct list_lru workingset_shadow_nodes;
>  static unsigned long count_shadow_nodes(struct shrinker *shrinker,
>  					struct shrink_control *sc)
>  {
> -	return list_lru_count_node(&workingset_shadow_nodes, sc->nid);
> +	unsigned long shadow_nodes;
> +	unsigned long max_nodes;
> +	unsigned long pages;
> +
> +	shadow_nodes = list_lru_count_node(&workingset_shadow_nodes, sc->nid);
> +	pages = node_present_pages(sc->nid);
> +	/*
> +	 * Active cache pages are limited to 50% of memory, and shadow
> +	 * entries that represent a refault distance bigger than that
> +	 * do not have any effect.  Limit the number of shadow nodes
> +	 * such that shadow entries do not exceed the number of active
> +	 * cache pages, assuming a worst-case node population density
> +	 * of 1/16th on average.

1/8th.  The actual code is consistent:

> +	 * On 64-bit with 7 radix_tree_nodes per page and 64 slots
> +	 * each, this will reclaim shadow entries when they consume
> +	 * ~2% of available memory:
> +	 *
> +	 * PAGE_SIZE / radix_tree_nodes / node_entries / PAGE_SIZE
> +	 */
> +	max_nodes = pages >> (1 + RADIX_TREE_MAP_SHIFT - 3);
> +
> +	if (shadow_nodes <= max_nodes)
> +		return 0;
> +
> +	return shadow_nodes - max_nodes;
>  }
>  
>  static enum lru_status shadow_lru_isolate(struct list_head *item,
>  					  spinlock_t *lru_lock,
>  					  void *arg)
>  {
> -	unsigned long *nr_reclaimed = arg;
>  	struct address_space *mapping;
>  	struct radix_tree_node *node;
>  	unsigned int i;
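
For anyone who wants to sanity-check the threshold arithmetic, here is a
quick userspace sketch (my own illustration, not kernel code; it assumes
the usual RADIX_TREE_MAP_SHIFT of 6, i.e. 64 slots per node, and the
worst-case density of 8 entries per node settled on above):

#include <stdio.h>

#define RADIX_TREE_MAP_SHIFT	6	/* 64 slots per radix tree node */

static unsigned long max_shadow_nodes(unsigned long pages)
{
	/*
	 * At most pages/2 shadow entries are useful (active cache is
	 * capped at 50% of memory), and at 8 entries per node in the
	 * worst case that is pages/16 nodes:
	 *
	 *   pages >> (1 + RADIX_TREE_MAP_SHIFT - 3) == pages >> 4
	 */
	return pages >> (1 + RADIX_TREE_MAP_SHIFT - 3);
}

int main(void)
{
	unsigned long pages = 4UL << 20;	/* 16G worth of 4k pages */
	unsigned long max_nodes = max_shadow_nodes(pages);

	printf("pages=%lu max_nodes=%lu worst-case entries=%lu\n",
	       pages, max_nodes, max_nodes << 3);
	return 0;
}

With 16G worth of 4k pages this caps the tree at 262144 nodes, whose
8-per-node worst case is 2097152 shadow entries, i.e. exactly the 50% of
memory that active cache pages can occupy.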