2011/12/30 Tao Ma <tm@xxxxxx>:
> On 12/30/2011 04:11 PM, KOSAKI Motohiro wrote:
>> 2011/12/30 Tao Ma <tm@xxxxxx>:
>>> In our testing of mlock, we have found a severe performance
>>> regression. Further investigation shows that mlock is blocked
>>> heavily by lru_add_drain_all, which calls schedule_on_each_cpu and
>>> flushes the work queue; this is very slow when we have several
>>> cpus.
>>>
>>> So we have tried two ways to solve it:
>>> 1. Add a per-cpu counter for all the pagevecs so that we don't
>>>    schedule and flush the lru_drain work if the cpu doesn't have
>>>    any pagevecs (I have finished this code already).
>>> 2. Remove the lru_add_drain_all call.
>>>
>>> The first one has a problem: on our production system all the cpus
>>> are busy, so I guess there is very little chance for a cpu to have
>>> zero pagevecs, except when you run several consecutive mlocks.
>>>
>>> From the commit log that added this function (8891d6da), it seems
>>> we don't have to call it. So the second option seems both easy and
>>> workable, hence this patch.
>>
>> Could you please show us your system environment and benchmark
>> programs? Usually lru_drain_** is much faster than the mlock() body,
>> because the latter does plenty of memset(page).
> The system environment is: 16 core Xeon E5620, 24G memory.
>
> I have attached the program. It is very simple and just uses
> mlock/munlock.

Your test program is too artificial: 20sec/100000 times = 200usec per
iteration, and it mlocks and munlocks the exact same address over and
over. So yes, if lru_add_drain_all() is removed, it becomes nearly a
no-op, but that is a worthless comparison; no practical program uses
mlock in such a strange way.

Still, 200usec is much more than I measured before. I'll dig into it a
bit more.
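
Since the attached program is not reproduced in the thread, here is a
minimal sketch consistent with the description: repeatedly mlocking
and munlocking the same buffer and timing the loop. The buffer size is
an assumption; only the 100000 iteration count comes from the numbers
quoted above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#define LEN   (512 * 1024)   /* assumed buffer size, not from the thread */
#define LOOPS 100000         /* matches the 100000 iterations quoted above */

int main(void)
{
	void *buf;
	struct timeval start, end;
	long usec;
	int i;

	/* page-aligned allocation so mlock covers whole pages */
	if (posix_memalign(&buf, 4096, LEN)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0, LEN);	/* fault the pages in once, up front */

	gettimeofday(&start, NULL);
	for (i = 0; i < LOOPS; i++) {
		/* mlock/munlock the exact same address every iteration */
		if (mlock(buf, LEN) || munlock(buf, LEN)) {
			perror("mlock/munlock");
			return 1;
		}
	}
	gettimeofday(&end, NULL);

	usec = (end.tv_sec - start.tv_sec) * 1000000L
	     + (end.tv_usec - start.tv_usec);
	printf("%d mlock/munlock pairs in %ld usec (%ld usec each)\n",
	       LOOPS, usec, usec / LOOPS);
	return 0;
}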
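
For context on the path being blamed, lru_add_drain_all() in kernels
of that era is essentially a single schedule_on_each_cpu() call, which
queues a work item on every online CPU and waits for all of them to
complete, whether or not a given CPU has anything sitting in its
pagevecs. Roughly (a sketch of the mm/swap.c code, not a standalone
program):

static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
	lru_add_drain();	/* drain this CPU's pagevecs onto the LRU lists */
}

int lru_add_drain_all(void)
{
	/*
	 * Queue lru_add_drain_per_cpu on every online CPU and wait for
	 * them all to finish; this is the flush that mlock() blocks on.
	 */
	return schedule_on_each_cpu(lru_add_drain_per_cpu);
}

Approach 1 above amounts to checking, per CPU, whether the pagevecs
are empty before queueing the work; on a fully loaded box that check
would rarely fire, which is the objection raised against it.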