On Tue, Aug 09, 2011 at 04:04:21AM -0700, Michel Lespinasse wrote:
> - Use my proposed page count lock in order to avoid the race. One
> would have to convert all get_page_unless_zero() sites to use it. I
> expect the cost would be low but still measurable.

I haven't yet focused on your problem since we talked about it at the
MM summit, but I seem to recall I suggested there to just get to the
head page and always take the lock on it. split_huge_page only works
on 2M-aligned pages; the rest you don't care about. Getting to the
head page and taking its compound_lock should always be safe. That
will still scale far better than taking the lru_lock for the whole
zone (which would also work). It also seems the best way to stop
split_huge_page without having to alter the put_page fast path when it
works on head pages (the only thing that goes into the complex
put_page slow path is the release of tail pages after get_user_pages*,
so it'd be nice if the put_page fast path still didn't need to take
locks). A rough sketch of what I mean is appended at the end of this
mail.

> - It'd be sweet if one could somehow record the time a THP page was
> created, and wait for at least one RCU grace period *starting from the
> recorded THP creation time* before splitting huge pages. In practice,
> we would be very unlikely to have to wait since the grace period would
> be already expired. However, I don't think RCU currently provides such
> a mechanism - Paul, is this something that would seem easy to
> implement or not ?

This looks sweet. We could store a quiescent-point generation counter
in page[1].something: if the page has the same generation as the last
RCU quiescent point (vs rcu_read_lock), we synchronize_rcu() before
starting split_huge_page. split_huge_page is serialized through the
anon_vma lock, however, so we'd need to release the anon_vma lock,
synchronize_rcu() and retry; on the retry the page[1].something
sequence counter would be older than the RCU generation counter and it
would proceed (maybe another thread or process gets there first, but
that's ok). This idea is also sketched at the end of the mail. I
didn't have better ideas than yours above, but I'll keep thinking.

> > When I made deactivate_page, I didn't consider that honestly.
> > IMHO, it shouldn't be a problem as deactivate_page holds a reference
> > on the page via pagevec_lookup, so the page shouldn't be gone under us.
>
> Agree - it seems like you are guaranteed to already hold a reference
> (but then a straight get_page should be sufficient, right ?)

I hope this is not an issue, given that the page is guaranteed not to
be a THP when get_page_unless_zero runs on it.
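Something like the following is what I have in mind for the head-page
locking; it's only a sketch (the helper name is made up, and whether
the head itself can be freed under us is glossed over), not the real
fix:

/*
 * Rough sketch only, not the real fix: instead of a new page-count
 * lock at every get_page_unless_zero() site, always walk to the head
 * page and take its compound_lock around the speculative reference.
 * __split_huge_page_refcount() runs under the head's compound_lock,
 * so the compound page cannot be split while we hold it here.  The
 * helper name is made up and error handling is glossed over.
 */
static inline int get_page_unless_zero_stable(struct page *page)
{
	struct page *head = compound_head(page);
	unsigned long flags;
	int got;

	flags = compound_lock_irqsave(head);
	got = get_page_unless_zero(head);
	compound_unlock_irqrestore(head, flags);

	return got;
}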
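And a very rough sketch of the grace-period stamp idea, purely
illustrative: the thp_rcu_gen field is made up (it would live in the
page[1].something slot mentioned above), and rcu_batches_completed()
is only a stand-in for whatever generation counter RCU could expose
for this, which is exactly the open question:

/*
 * Hypothetical sketch, not real kernel code: "thp_rcu_gen" is a
 * made-up field in the second struct page of the compound page, and
 * rcu_batches_completed() stands in for "the current RCU generation".
 */

/* At THP allocation time, record the creation generation. */
static void thp_stamp_creation(struct page *head)
{
	head[1].thp_rcu_gen = rcu_batches_completed();
}

/*
 * Before splitting: if no quiescent point has passed since the THP
 * was created, drop the anon_vma lock (elided here), wait a full
 * grace period and let the caller retry split_huge_page().
 */
static void thp_wait_creation_grace_period(struct page *head)
{
	if (head[1].thp_rcu_gen == rcu_batches_completed())
		synchronize_rcu();
}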