On Sun, 24 Nov 2013 18:38:26 -0500 Johannes Weiner <hannes@xxxxxxxxxxx> wrote:

> ...
>
> + * Access frequency and refault distance
> + *
> + * A workload is thrashing when its pages are frequently used but they
> + * are evicted from the inactive list every time before another access
> + * would have promoted them to the active list.
> + *
> + * In cases where the average access distance between thrashing pages
> + * is bigger than the size of memory there is nothing that can be
> + * done - the thrashing set could never fit into memory under any
> + * circumstance.
> + *
> + * However, the average access distance could be bigger than the
> + * inactive list, yet smaller than the size of memory.  In this case,
> + * the set could fit into memory if it weren't for the currently
> + * active pages - which may be used more, hopefully less frequently:
> + *
> + *      +-memory available to cache-+
> + *      |                           |
> + *      +-inactive------+-active----+
> + *  a b | c d e f g h i | J K L M N |
> + *      +---------------+-----------+

So making the inactive list smaller will worsen this problem?  If so,
don't we have a conflict with this objective:

> Right now we have a fixed ratio (50:50) between inactive and active
> list but we already have complaints about working sets exceeding half
> of memory being pushed out of the cache by simple streaming in the
> background.  Ultimately, we want to adjust this ratio and allow for a
> much smaller inactive list.

?

> + * It is prohibitively expensive to accurately track access frequency
> + * of pages.  But a reasonable approximation can be made to measure
> + * thrashing on the inactive list, after which refaulting pages can be
> + * activated optimistically to compete with the existing active pages.
> + *
> + * Approximating inactive page access frequency - Observations:
> + *
> + * 1. When a page is accessed for the first time, it is added to the
> + *    head of the inactive list, slides every existing inactive page
> + *    towards the tail by one slot, and pushes the current tail page
> + *    out of memory.
> +
>
> ...
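
For readers skimming the thread, below is a minimal sketch of the
activation decision the quoted comment is building towards.  It is an
illustration under simplified assumptions, not the patch's code: the
names (struct lruvec_sketch, remember_eviction, refault_should_activate)
are made up for this example, and the bookkeeping is reduced to a bare
per-list counter, whereas the series itself remembers the eviction
information in the evicted page's page cache slot.

#include <stdbool.h>

/*
 * Illustrative sketch only.  A counter ticks once for every eviction
 * from the inactive list; the value at eviction time is remembered
 * for the page, and on refault the distance travelled since then is
 * compared against the room the active list could give up.
 */
struct lruvec_sketch {
	unsigned long evictions;	/* ticks once per inactive eviction */
	unsigned long active_pages;	/* current size of the active list */
};

/* Reclaim path: a page falls off the inactive tail. */
static unsigned long remember_eviction(struct lruvec_sketch *l)
{
	return ++l->evictions;		/* remembered for the evicted page */
}

/* Fault path: the same page is read back in. */
static bool refault_should_activate(struct lruvec_sketch *l,
				    unsigned long evicted_at)
{
	unsigned long refault_distance = l->evictions - evicted_at;

	/*
	 * The page would have needed refault_distance more inactive
	 * slots to be re-referenced before reclaim.  If the active list
	 * covers that deficit, the combined working set would fit in
	 * memory, so let the refault compete with the active pages.
	 */
	return refault_distance <= l->active_pages;
}

The comparison mirrors the diagram above: a refault distance that fits
within the active list is worth activating for, while anything larger
could never fit in memory anyway.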