On 10/09/2011 05:53 AM, Hillf Danton wrote:
> When flushing the TLB, if @vma is backed by huge pages, the huge TLB
> entries should be flushed, since a huge page is defined to be far
> larger than a normal page, and the flush loop is shortened a bit.
> Any comment is welcome.
Note that the current implementation works, but is not optimal.
> Thanks
>
> Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
> ---
>
> --- a/arch/mips/mm/tlb-r4k.c	Mon May 30 21:17:04 2011
> +++ b/arch/mips/mm/tlb-r4k.c	Sun Oct 9 20:50:06 2011
> @@ -120,22 +120,35 @@ void local_flush_tlb_range(struct vm_are
>  	if (cpu_context(cpu, mm) != 0) {
>  		unsigned long size, flags;
> +		int huge = is_vm_hugetlb_page(vma);
>
>  		ENTER_CRITICAL(flags);
> -		size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
> -		size = (size + 1) >> 1;
> +		if (huge) {
> +			size = (end - start) / HPAGE_SIZE;
> +		} else {
> +			size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
> +			size = (size + 1) >> 1;
> +		}

Perhaps:

	if (huge) {
		start = round_down(start, HPAGE_SIZE);
		end = round_up(end, HPAGE_SIZE);
		size = (end - start) >> HPAGE_SHIFT;
	} else {
		start = round_down(start, PAGE_SIZE << 1);
		end = round_up(end, PAGE_SIZE << 1);
		size = (end - start) >> (PAGE_SHIFT + 1);
	}

. . .
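To make the point concrete: with the unrounded division in the posted
patch, a short range that straddles a huge-page boundary computes
size == 0, and the later end &= HPAGE_MASK then truncates the range so
the last huge page is never probed. A minimal userspace sketch (not
kernel code) of the arithmetic, with round_down()/round_up() written
out as local stand-ins for the kernel macros and a 16 MB huge page
assumed:

	#include <stdio.h>

	/* Local stand-ins for the kernel's round_down()/round_up()
	 * (power-of-two alignment assumed). */
	#define round_down(x, a)  ((x) & ~((a) - 1UL))
	#define round_up(x, a)    round_down((x) + (a) - 1UL, (a))

	int main(void)
	{
		unsigned long hpage_size = 1UL << 24; /* assumed 16 MB */
		unsigned long start = 0x1f00000UL; /* in one huge page */
		unsigned long end   = 0x2100000UL; /* in the next one */

		/* Unrounded, as posted: 2 MB / 16 MB == 0 entries. */
		printf("unrounded size: %lu\n", (end - start) / hpage_size);

		/* Rounded: the range touches two huge pages. */
		unsigned long s = round_down(start, hpage_size);
		unsigned long e = round_up(end, hpage_size);
		printf("rounded size:   %lu\n", (e - s) / hpage_size);
		return 0;
	}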
>  		if (size <= current_cpu_data.tlbsize/2) {
Has anybody benchmarked this heuristic? I guess it seems reasonable.
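For reference, the fallback when size exceeds half the TLB is to drop
the whole context for the ASID (the drop_mmu_context() path, snipped
here) rather than probe entry by entry. A toy sketch of that
trade-off, with probe_one() and drop_whole_context() as hypothetical
stand-ins and a 48-entry TLB assumed:

	#include <stdio.h>

	/* Hypothetical stand-ins for the real TLB operations. */
	static void probe_one(unsigned long va)
	{
		printf("probe %#lx\n", va);
	}

	static void drop_whole_context(void)
	{
		printf("flush whole context\n");
	}

	/* Sketch of the heuristic: probe entry by entry only while
	 * the range needs no more entries than half the TLB holds. */
	static void flush_range(unsigned long start, unsigned long end,
				unsigned long step, unsigned int tlbsize)
	{
		unsigned long size = (end - start) / step;

		if (size <= tlbsize / 2) {
			for (; start < end; start += step)
				probe_one(start);
		} else {
			drop_whole_context();
		}
	}

	int main(void)
	{
		flush_range(0x0, 0x8000, 0x2000, 48);   /* 4 probes */
		flush_range(0x0, 0x100000, 0x2000, 48); /* 128: full flush */
		return 0;
	}

The /2 presumably reflects that each probe costs a TLBP plus hazard
stalls, so beyond half the TLB's capacity a wholesale flush is almost
certainly cheaper.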
>  			int oldpid = read_c0_entryhi();
>  			int newpid = cpu_asid(cpu, mm);
>
> -			start &= (PAGE_MASK << 1);
> -			end += ((PAGE_SIZE << 1) - 1);
> -			end &= (PAGE_MASK << 1);
> +			if (huge) {
> +				start &= HPAGE_MASK;
> +				end &= HPAGE_MASK;
> +			} else {
> +				start &= (PAGE_MASK << 1);
> +				end += ((PAGE_SIZE << 1) - 1);
> +				end &= (PAGE_MASK << 1);
> +			}
This masking is already handled by the rounding above, so it is removed.
>  			while (start < end) {
>  				int idx;
>
>  				write_c0_entryhi(start | newpid);
> -				start += (PAGE_SIZE << 1);
> +				if (huge)
> +					start += HPAGE_SIZE;
> +				else
> +					start += (PAGE_SIZE << 1);
>  				mtc0_tlbw_hazard();
>  				tlb_probe();
>  				tlb_probe_hazard();
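The stride is where the win comes from: an R4K-style TLB entry maps an
even/odd pair of base pages (hence the PAGE_SIZE << 1 step), while a
huge page occupies a single entry of its own. A toy count of the loop
iterations, assuming 4 KB base pages and 16 MB huge pages:

	#include <stdio.h>

	/* Count the probe iterations for a given stride; assumes
	 * start/end are already stride-aligned, as the earlier
	 * rounding guarantees. */
	static unsigned long probes(unsigned long start, unsigned long end,
				    unsigned long stride)
	{
		unsigned long n = 0;

		while (start < end) {
			start += stride;
			n++;
		}
		return n;
	}

	int main(void)
	{
		unsigned long start = 0, end = 32UL << 20; /* 32 MB range */

		/* Assumed geometry: 4 KB base pages, 16 MB huge pages. */
		printf("page pairs: %lu\n", probes(start, end, 2 * 4096UL));
		printf("huge pages: %lu\n", probes(start, end, 16UL << 20));
		return 0;
	}

For a 32 MB range that is 4096 probes on the normal path versus 2 on
the huge path.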
If we do something like that, then...

Acked-by: David Daney <david.daney@xxxxxxxxxx>