On (09/17/15 08:21), Vlastimil Babka wrote:
> On 09/15/2015 06:22 AM, Sergey Senozhatsky wrote:
> >On (09/15/15 00:08), Dan Streetman wrote:
> >[..]
> >
> >correct. a bit of internals: we don't scan all the zspages every
> >time. each class has stats for allocated objects, used objects,
> >etc. so we 'compact' only classes that can be compacted:
> >
> > static unsigned long zs_can_compact(struct size_class *class)
> > {
> > 	unsigned long obj_wasted;
> >
> > 	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
> > 		zs_stat_get(class, OBJ_USED);
> >
> > 	obj_wasted /= get_maxobj_per_zspage(class->size,
> > 			class->pages_per_zspage);
> >
> > 	return obj_wasted * class->pages_per_zspage;
> > }
> >
> >if we can free any zspages (which is at least one page), then we
> >attempt to do so.
> >
> >is compaction the root cause of the symptoms Vitaly observes?
>
> He mentioned the "compact_stalls" counter which in /proc/vmstat is for the
> traditional physical memory compaction, not the zsmalloc-specific one. Which
> would imply high-order allocations. Does zsmalloc try them first before
> falling back to the order-0 zspages linked together manually?

each zspage is a bunch (pages_per_zspage) of individual order-0
alloc_page() calls; there is no high-order allocation attempt:

	for (i = 0; i < class->pages_per_zspage; i++) {
		struct page *page;

		page = alloc_page(flags);
		if (!page)
			goto cleanup;

		INIT_LIST_HEAD(&page->lru);
		if (i == 0) {	/* first page */
			SetPagePrivate(page);
			set_page_private(page, 0);
			first_page = page;
			first_page->inuse = 0;
		}
		if (i == 1)
			set_page_private(first_page, (unsigned long)page);
		if (i >= 1)
			set_page_private(page, (unsigned long)first_page);
		if (i >= 2)
			list_add(&page->lru, &prev_page->lru);
		if (i == class->pages_per_zspage - 1)	/* last page */
			SetPagePrivate2(page);
		prev_page = page;
	}
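
for completeness, a simplified sketch of the caller side (not the exact
mainline code; migrate_and_free_zspages() is a made-up name standing in
for the real object migration + empty zspage freeing): the
zs_can_compact() return value gates per-class compaction, so classes
that can't give back at least one zspage are skipped without their
zspages ever being touched

	static unsigned long zs_compact_sketch(struct zs_pool *pool)
	{
		unsigned long pages_freed = 0;
		int i;

		for (i = zs_size_classes - 1; i >= 0; i--) {
			struct size_class *class = pool->size_class[i];

			if (class->index != i)		/* skip merged classes */
				continue;

			if (!zs_can_compact(class))	/* nothing to reclaim here */
				continue;

			pages_freed += migrate_and_free_zspages(pool, class);
		}

		return pages_freed;
	}

	-ss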