On Fri, 2017-04-21 at 12:55 +0900, damien.lemoal@xxxxxxx wrote:
> +static void dmz_shrink_mblock_cache(struct dmz_target *dmz, bool idle)
> +{
> +	struct dmz_mblock *mblk;
> +	unsigned int nr_mblks;
> +
> +	if (!dmz->max_nr_mblks)
> +		return;
> +
> +	if (idle)
> +		nr_mblks = dmz->min_nr_mblks;
> +	else
> +		nr_mblks = dmz->max_nr_mblks;
> +
> +	while (atomic_read(&dmz->nr_mblks) > nr_mblks &&
> +	       !list_empty(&dmz->mblk_lru_list)) {
> +		mblk = list_first_entry(&dmz->mblk_lru_list,
> +					struct dmz_mblock, link);
> +		list_del_init(&mblk->link);
> +		rb_erase(&mblk->node, &dmz->mblk_rbtree);
> +		dmz_free_mblock(dmz, mblk);
> +	}
> +}

(off-list)

Hello Damien,

Is mblk_lru_list perhaps a cache that should be freed under memory
pressure? If so, please add a shrinker (struct shrinker +
register_shrinker()) when you repost this patch series, such that this
memory can be freed if memory pressure becomes too high.
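Something along the lines of the untested sketch below is what I have
in mind. It assumes a struct shrinker member is added to struct
dmz_target (I called it mblk_shrinker, a name I made up) and that a
lock such as the mblk_lock spinlock I use here serializes access to
the LRU list and the rbtree; the other names are taken from the hunk
quoted above, and for simplicity the count callback ignores
min_nr_mblks.

static unsigned long dmz_mblock_shrinker_count(struct shrinker *shrink,
					       struct shrink_control *sc)
{
	struct dmz_target *dmz = container_of(shrink, struct dmz_target,
					      mblk_shrinker);

	/* Report how many cached metadata blocks could be reclaimed. */
	return atomic_read(&dmz->nr_mblks);
}

static unsigned long dmz_mblock_shrinker_scan(struct shrinker *shrink,
					      struct shrink_control *sc)
{
	struct dmz_target *dmz = container_of(shrink, struct dmz_target,
					      mblk_shrinker);
	struct dmz_mblock *mblk;
	unsigned long count = 0;

	spin_lock(&dmz->mblk_lock);
	/* Free up to sc->nr_to_scan blocks, oldest (LRU head) first. */
	while (count < sc->nr_to_scan &&
	       !list_empty(&dmz->mblk_lru_list)) {
		mblk = list_first_entry(&dmz->mblk_lru_list,
					struct dmz_mblock, link);
		list_del_init(&mblk->link);
		rb_erase(&mblk->node, &dmz->mblk_rbtree);
		dmz_free_mblock(dmz, mblk);
		count++;
	}
	spin_unlock(&dmz->mblk_lock);

	/* Tell the VM how many objects were actually freed. */
	return count;
}

Registration in the target constructor would then look like this, with
a matching unregister_shrinker(&dmz->mblk_shrinker) in the destructor:

	dmz->mblk_shrinker.count_objects = dmz_mblock_shrinker_count;
	dmz->mblk_shrinker.scan_objects = dmz_mblock_shrinker_scan;
	dmz->mblk_shrinker.seeks = DEFAULT_SEEKS;
	ret = register_shrinker(&dmz->mblk_shrinker);

Thanks,

Bart.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel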