On Fri, 2 Mar 2012, Mel Gorman wrote:

> I considered using a seqlock but it isn't cheap. The read side is heavy
> with the possibility that it starts spinning and incurs a read barrier
> (looking at read_seqbegin() here). The retry block incurs another read
> barrier, so basically it would be no better than what is there currently
> (which, at a 4% performance hit, sucks).

Oh. You don't have a read barrier? So your approach is buggy? We could
have read some state before someone else incremented the seq counter,
then cached it, then read the counter, done the processing and found
that the seq count had not changed?

> In the case of seqlocks, a reader will backoff if a writer is in progress
> but the page allocator doesn't need that which is why I felt it was ok

You could simply not use the writer section if you think that is ok. I
doubt it, but let's at least start using a known serialization construct
that would allow us to fix things up if we find that we need to update
multiple variables protected by the seqlock.

> Allocation failure is an unusual situation that can trigger application
> exit or an OOM so it's ok to treat it as a slow path. A normal seqlock
> would retry unconditionally and potentially have to handle the case
> where it needs to free the page before retrying which is pointless.

It will only retry as long as the writer holds the "lock". As with a
spinlock, the holdoff time depends on the size of the critical section,
and initially you could just avoid having write sections.