On (23/02/23 15:09), Minchan Kim wrote:
> On Thu, Feb 23, 2023 at 12:04:46PM +0900, Sergey Senozhatsky wrote:
> > This optimization has no effect. It only ensures that
> > when a page was added to its corresponding fullness
> > list, its "inuse" counter was higher or lower than the
> > "inuse" counter of the page at the head of the list.
> > The intention was to keep busy pages at the head, so
> > they could be filled up and moved to the ZS_FULL
> > fullness group more quickly. However, this doesn't work
> > as the "inuse" counter of a page can be modified by
>
> zspage
>
> Let's use the term zspage instead of page to prevent confusion.
>
> > obj_free() but the page may still belong to the same
> > fullness list. So, fix_fullness_group() won't change
>
> Yes. I didn't expect it to be perfect from the beginning,
> but thought it would help as just a little optimization.
>
> > the page's position in relation to the head's "inuse"
> > counter, leading to a largely random order of pages
> > within the fullness list.
>
> Good point.
>
> > For instance, consider a printout of the "inuse"
> > counters of the first 10 pages in a class that holds
> > 93 objects per zspage:
> >
> >   ZS_ALMOST_EMPTY: 36 67 68 64 35 54 63 52
> >
> > As we can see the page with the lowest "inuse" counter
> > is actually the head of the fullness list.
>
> Let's write what the patch is doing clearly:
>
> "So, let's remove the pointless optimization" or some better wording.

ACK to all feedback (for all the patches). Thanks!
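
For reference, below is a small standalone C sketch of the behaviour discussed above. It is not the kernel code: the list handling is simplified, and the names (insert_zspage(), obj_free(), "inuse", ZS_ALMOST_EMPTY) merely mirror mm/zsmalloc.c for illustration. It models the heuristic being removed, which compares a newly inserted zspage's "inuse" counter only against the current list head, while later frees lower "inuse" in place without re-sorting the list, so the ordering degrades into the largely random sequence shown in the commit message.

/*
 * Simplified user-space model of the insert_zspage() "inuse" heuristic
 * discussed above. Structures and list handling are not the real
 * mm/zsmalloc.c code; only the names and the idea are mirrored.
 */
#include <stdio.h>
#include <stdlib.h>

struct zspage {
	int inuse;		/* allocated objects in this zspage */
	struct zspage *next;	/* singly linked fullness list, simplified */
};

/* Stand-in for one fullness list, e.g. ZS_ALMOST_EMPTY. */
static struct zspage *fullness_list;

/*
 * The heuristic being removed: the new zspage is compared only against
 * the current list head, in an attempt to keep busier zspages in front.
 */
static void insert_zspage(struct zspage *zspage)
{
	struct zspage *head = fullness_list;

	if (head && zspage->inuse < head->inuse) {
		/* less busy than the head: put it right after the head */
		zspage->next = head->next;
		head->next = zspage;
	} else {
		/* busier (or empty list): make it the new head */
		zspage->next = head;
		fullness_list = zspage;
	}
}

/*
 * Frees lower "inuse" in place. As long as the zspage stays in the same
 * fullness group, nothing re-sorts the list, so whatever ordering
 * insert_zspage() tried to establish degrades over time.
 */
static void obj_free(struct zspage *zspage)
{
	zspage->inuse--;
}

int main(void)
{
	struct zspage pages[8];
	struct zspage *p;
	int i;

	srand(1);
	for (i = 0; i < 8; i++) {
		pages[i].inuse = 30 + rand() % 40;
		insert_zspage(&pages[i]);
	}

	/* Random frees change "inuse" while list positions stay fixed. */
	for (i = 0; i < 100; i++)
		obj_free(&pages[rand() % 8]);

	for (p = fullness_list; p; p = p->next)
		printf("%d ", p->inuse);
	printf("\n");
	return 0;
}

Running it prints the "inuse" counters in list order; as with the ZS_ALMOST_EMPTY printout quoted above, the head is not guaranteed to be the busiest zspage, which is why the comparison against the head at insertion time buys nothing.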