From: Nitin Gupta <nitin.m.gupta@xxxxxxxxxx>
Date: Mon, 1 Feb 2016 19:21:21 -0800

> During hugepage unmap, a TLB flush is currently issued at every
> PAGE_SIZE'd boundary, which is unnecessary. We now issue the flush
> at REAL_HPAGE_SIZE boundaries only.
>
> Without this patch, workloads which unmap a large hugepage-backed
> VMA region get CPU lockups due to excessive TLB flush calls.
>
> Signed-off-by: Nitin Gupta <nitin.m.gupta@xxxxxxxxxx>

Thanks for finding this, but we'll need a few adjustments to your patch.

First of all, you can't do the final TLB flush of each REAL_HPAGE_SIZE
entry until all of the PTEs that cover that region have been cleared.
Otherwise a TLB miss on any cpu can reload the entry after you've
flushed it.

Second, the stores should be done in order and consecutively, so as to
optimize store buffer compression.

I would recommend clearing all of the PTEs and then executing the two
TLB and TSB flushes right afterwards as an independent operation, not
via pte_clear().
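
Something like the sketch below shows the ordering I mean. The two
flush helpers are hypothetical placeholders, not existing kernel
interfaces; the point is only that the PTE stores happen first, back
to back, and the flushes come after the whole region has been cleared.

	/*
	 * Sketch only: clear every PTE backing one REAL_HPAGE_SIZE region
	 * with plain consecutive stores, then flush the TLB and TSB for
	 * that region.  The two flush helpers below are hypothetical
	 * stand-ins for the real flush primitives.
	 */
	static void huge_pte_clear_region(struct mm_struct *mm,
					  unsigned long addr, pte_t *ptep)
	{
		unsigned long i, nptes = REAL_HPAGE_SIZE >> PAGE_SHIFT;

		/* In-order, consecutive stores so the store buffer can
		 * compress them.
		 */
		for (i = 0; i < nptes; i++)
			ptep[i] = __pte(0UL);

		/* Only after every PTE in the region is clear is it safe
		 * to flush; otherwise a TLB miss on another cpu could
		 * reload a stale translation in between.
		 */
		hypothetical_flush_tlb_region(mm, addr, REAL_HPAGE_SIZE);
		hypothetical_flush_tsb_region(mm, addr, REAL_HPAGE_SIZE);
	}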