On Wed, May 27, 2020 at 10:43 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Wed, May 27, 2020 at 11:06:47AM +0900, Joonsoo Kim wrote:
> > On Thu, May 21, 2020 at 8:26 AM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > >
> > > We activate cache refaults with reuse distances in pages smaller than
> > > the size of the total cache. This allows new pages with competitive
> > > access frequencies to establish themselves, as well as challenge and
> > > potentially displace pages on the active list that have gone cold.
> > >
> > > However, that assumes that active cache can only replace other active
> > > cache in a competition for the hottest memory. This is not a great
> > > default assumption. The page cache might be thrashing while there are
> > > enough completely cold and unused anonymous pages sitting around that
> > > we'd only have to write to swap once to stop all IO from the cache.
> > >
> > > Activate cache refaults when their reuse distance in pages is smaller
> > > than the total userspace workingset, including anonymous pages.
> >
> > Hmm... I'm not sure about the correctness of this change.
> >
> > IIUC, this patch leads to more activations in the file list, and more
> > activations here will challenge the anon list, since the rotation
> > ratio for the file list will be increased.
>
> Yes.
>
> > However, this change breaks the active/inactive concept of the file
> > list. The active/inactive separation is implemented by the in-list
> > refault distance. The anon list size has no direct connection with
> > the refault distance of the file list, so using the anon list size
> > to detect the workingset for file pages breaks the concept.
>
> This is intentional, because there IS a connection: they both take up
> space in RAM, and they both cost IO to bring back once reclaimed.

I know that. This is the reason I said 'no direct connection'. The anon
list size is directly related to the *possible* file list size.
But the active/inactive separation within one list is based first on the
*current* list size rather than the possible list size. Adding the anon
list size to detect the workingset means using the possible list size,
and I think that is wrong.

> When file is refaulting, it means we need to make more space for
> cache. That space can come from stale active file pages. But what if
> active cache is all hot, and meanwhile there are cold anon pages that
> we could swap out once and then serve everything from RAM?
>
> When file is refaulting, we should find the coldest data that is
> taking up RAM and kick it out. It doesn't matter whether it's file or
> anon: the goal is to free up RAM with the least amount of IO risk.

I understand your purpose and agree with it. We need to find a solution.
To achieve your goal, my suggestion is:

- refault distance < active file: do the activation and add up the IO cost
- refault distance < active file + anon list: just add up the IO cost

This doesn't break workingset detection on the file list, and it
challenges the anon list to the same degree as your patch does.

> Remember that the file/anon split, and the inactive/active split, are
> there to optimize reclaim. It doesn't mean that these memory pools are
> independent from each other.
>
> The file list is split in two because of use-once cache. The anon and
> file lists are split because of different IO patterns, because we may
> not have swap etc. But once we are out of use-once cache, have swap
> space available, and have corrected for the different cost of IO,
> there needs to be a relative order between all pages in the system to
> find the optimal candidates to reclaim.

> > My suspicion started with this counter-example.
> >
> > Environment:
> > anon: 500 MB (so hot) / 500 MB (so hot)
> > file: 50 MB (hot) / 50 MB (cold)
> >
> > Think about the situation where there is periodic access to another
> > file (100 MB) with low frequency (refault distance is 500 MB).
> >
> > Without your change, this periodic access doesn't cause thrashing for
> > the cached active file pages, since the refault distance of the
> > periodic access is larger than the size of the active file list.
> > However, with your change, it causes thrashing on the file list.
>
> It doesn't cause thrashing. It causes scanning because that 100M file
> IS thrashing: with or without my patch, that refault IO is occurring.

It could cause thrashing with your patch. Without the patch, the
current logic tries to find the hottest file pages that fit into the
current file list size, and it protects them successfully.

Assume that the access distance of the 50 MB of hot file pages is
60 MB, which is less than the whole file list size but larger than the
inactive list size. Without your patch, the 50 MB of hot pages is not
evicted at all. All these hot pages are protected from the 100 MB of
low-access-frequency pages. The 100 MB of low-access-frequency pages is
refaulted repeatedly, but that is correct behaviour.

However, with your patch, the 50 MB of hot file pages is deactivated by
the newly added file pages with low access frequency. And then, since
the access distance of the 50 MB of hot pages is larger than the
inactive list size, they cannot get a second chance and could finally
be evicted.

I think this is thrashing, since low-access-frequency pages that do not
fit the current file list size push out the high-access-frequency pages
that do fit it, and it would happen again and again. Maybe the logic
could be corrected if the patch considered the inactive age of the anon
list, but I think my suggestion above would be enough.
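To make the comparison concrete, here is a rough userspace sketch of the
two policies being discussed, evaluated against the example numbers. The
function and field names are mine for illustration only; this is not the
actual mm/workingset.c code, and all sizes are simply in MB.

```c
#include <stdbool.h>

/* Illustrative only: not the kernel's real data structures. */
struct refault_decision {
	bool activate;       /* move the refaulted page to the active list */
	bool charge_io_cost; /* account refault IO to challenge the anon list */
};

/* The patch under discussion: activate whenever the reuse distance fits
 * in the whole userspace workingset, file + anon. */
static struct refault_decision policy_patch(unsigned long distance,
					    unsigned long file,
					    unsigned long anon)
{
	struct refault_decision d = { false, false };

	if (distance < file + anon)
		d.activate = d.charge_io_cost = true;
	return d;
}

/* The suggestion above: activate only within the active file list, but
 * still charge the IO cost within active file + anon, so the anon list
 * is challenged to the same degree. */
static struct refault_decision policy_suggested(unsigned long distance,
						unsigned long active_file,
						unsigned long anon)
{
	struct refault_decision d = { false, false };

	if (distance < active_file)
		d.activate = d.charge_io_cost = true;
	else if (distance < active_file + anon)
		d.charge_io_cost = true;
	return d;
}
```

With the example numbers (active file 50, inactive file 50, anon 1000,
periodic refault distance 500), the patch activates the periodic pages
(500 < 1100) and so deactivates the hot 50 MB, while the suggested
policy leaves them inactive (500 >= 50) but still charges the IO cost
(500 < 1050) to pressure the anon list.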
> What this patch acknowledges is that the 100M file COULD fit fully
> into memory, and not require any IO to serve, IFF 100M of the active
> file or anon pages were cold and could be reclaimed or swapped out.
>
> In your example, the anon set is hot. We'll scan it slowly (at the
> rate of IO from the other file) and rotate the pages that are in use -
> which would be all of them. Likewise for the file - there will be some
> deactivations, but mark_page_accessed() or the second-chance algorithm
> in page_check_references() for mapped pages will keep the hottest
> pages active.
>
> In a slightly modified example, 400M of the anon set is hot and 100M
> cold. Without my patch, we would never look for them and the second
> file would be IO-bound forever. After my patch, we would scan anon,
> eventually find the cold pages, swap them out, and then serve the
> entire workingset from memory.

Again, I agree with your goal. What I don't agree with is the
implementation used to achieve it.

Thanks.