On 9/24/19 11:32 AM, David Hildenbrand wrote:
> On 24.09.19 16:23, Michal Hocko wrote:
>> On Wed 18-09-19 10:52:25, Alexander Duyck wrote:
>> [...]
>>> In order to try and keep the time needed to find a non-reported page
>>> to a minimum, we maintain a "reported_boundary" pointer. This pointer
>>> is used by the get_unreported_pages iterator to determine at what
>>> point it should resume searching for non-reported pages. In order to
>>> guarantee pages do not get past the scan, I have modified
>>> add_to_free_list_tail so that it will not insert pages behind the
>>> reported_boundary.
>>>
>>> If another process needs to perform a massive manipulation of the
>>> free list, such as compaction, it can either reset a given individual
>>> boundary, which will push the boundary back to the list_head, or it
>>> can clear the bit indicating that the zone is actively processing,
>>> which will result in the reporting process resetting all of the
>>> boundaries for a given zone.
>>
>> Is this any different from the previous version? The last review
>> feedback (both from me and Mel) was that we are not happy to have
>> externally imposed constraints on how the page allocator is supposed
>> to maintain its free lists.
>>
>> If this is really the only way to go forward, then I would like to
>> hear very convincing arguments about other approaches not being
>> feasible.
>
> Adding to what Alexander said, I don't consider the other approaches
> (especially the bitmap-based approach Nitesh is currently working on)
> infeasible. There might be more rough edges (e.g., sparse zones) and
> sometimes a little more work to be done, but they are definitely
> feasible. Incorporating stuff into the buddy might make some tasks
> (e.g., identifying free pages) more efficient.

My plan was to get a framework ready that performs decently and is
acceptable upstream (keeping core-mm changes to a minimum), and then
keep optimizing it for different use cases. Indeed, the bitmap-based
approach may not be efficient for every available use case, but I am
not sure we want to target that, considering it may require mm changes.

> I still somewhat like the idea of capturing hints of free pages (in
> whatever data structure) and then going over the hints, seeing if the
> pages are still free. Then only temporarily isolating the still-free
> pages, reporting them, and un-isolating them after they were reported.
> I like the idea that the pages are not fake-allocated but only
> temporarily blocked. That works nicely e.g., with the movable zone
> (which contains only movable data).
>
> But anyhow, after decades of people working on free page
> hinting/reporting, I am happy with anything that gets accepted
> upstream :D

+1

> --

Nitesh
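
For reference, here is a minimal user-space sketch (in C) of the
boundary scheme Alexander describes above. The structure and function
names are simplified stand-ins for illustration only, not the actual
kernel structures or code from the patch:

#include <stdio.h>

struct page {
	struct page *prev, *next;
	int pfn;
	int reported;
};

struct free_list {
	struct page head;	/* sentinel node */
	struct page *boundary;	/* first reported page, or &head if the
				 * reported section at the tail is empty */
};

void free_list_init(struct free_list *fl)
{
	fl->head.prev = fl->head.next = &fl->head;
	fl->boundary = &fl->head;
}

/* Insert 'page' immediately before 'pos'. */
void list_insert_before(struct page *page, struct page *pos)
{
	page->next = pos;
	page->prev = pos->prev;
	pos->prev->next = page;
	pos->prev = page;
}

/*
 * Tail insertion that respects the boundary: a freshly freed
 * (unreported) page is placed in front of the boundary, so it can
 * never end up behind the point the reporting scan already covered.
 */
void add_to_free_list_tail(struct free_list *fl, struct page *page)
{
	list_insert_before(page, fl->boundary);
}

/*
 * Find the next unreported page, scanning backward from the boundary,
 * and advance the boundary so the next search resumes from there.
 */
struct page *get_unreported_page(struct free_list *fl)
{
	struct page *page = fl->boundary->prev;

	if (page == &fl->head)
		return NULL;	/* everything has been reported */

	fl->boundary = page;
	page->reported = 1;
	return page;
}

/*
 * Push the boundary back to the list head, e.g., before a heavy
 * free-list reshuffle such as compaction; the scan then restarts.
 */
void reset_boundary(struct free_list *fl)
{
	fl->boundary = &fl->head;
}

int main(void)
{
	struct free_list fl;
	struct page pages[4];
	struct page *p;
	int i;

	free_list_init(&fl);
	for (i = 0; i < 4; i++) {
		pages[i].pfn = i;
		pages[i].reported = 0;
		add_to_free_list_tail(&fl, &pages[i]);
	}

	while ((p = get_unreported_page(&fl)))
		printf("reporting pfn %d\n", p->pfn);

	reset_boundary(&fl);
	return 0;
}

Run, this reports pfns 3, 2, 1, 0: the boundary walks backward from the
tail, and because add_to_free_list_tail() inserts in front of the
boundary, a page freed mid-scan is still picked up on a later pass
rather than slipping behind it.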