Re: [PATCH 0/3] TLB flush multiple pages per IPI v5

* Mel Gorman <mgorman@xxxxxxx> wrote:

> > I think since it is you who wants to introduce additional complexity into the 
> > x86 MM code the burden is on you to provide proof that the complexity of pfn 
> > (or struct page) tracking is worth it.
> 
> I'm taking a situation whereby IPIs are sent like crazy with interrupt storms 
> and replacing it with something that is a lot more efficient and that minimises 
> the number of potential surprises. I'm stating that the benefit of PFN tracking 
> is unknowable in the general case because it depends on the workload, timing and 
> the exact CPU used, so any example provided can be negated with a counter-example 
> such as a trivial sequential reader that shows no benefit. The series as posted 
> is approximately in line with current behaviour, minimising the chances of 
> surprise regressions from excessive TLB flushing.
> 
> You are actively blocking a measurable improvement and forcing it to be replaced 
> with something whose full impact is unquantifiable. Any regressions in this area 
> due to increased TLB misses could take several kernel releases to surface, as the 
> issue will be so difficult to detect.
> 
> I'm going to implement the approach you are forcing because there is an x86 part 
> of the patch and you are the maintainer who could NAK it indefinitely. However, 
> I'm extremely pissed about being forced to introduce these indirect, 
> unpredictable costs, because I know the alternative is you dragging this out for 
> weeks with no satisfactory conclusion in an argument that I cannot prove in the 
> general case.
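
[ The trade-off in dispute, reduced to a compilable userspace toy; every
  name below (flush_batch, drain_per_pfn, drain_full) is made up for
  illustration and is not the patch's actual code. Both drain strategies
  batch the IPIs identically; they differ only in what each IPI does once
  it arrives: ]

	/*
	 * Toy model: reclaim defers TLB flushes into a batch; the batch
	 * is drained either per-pfn (INVLPG-style) or with one full flush.
	 */
	#include <stdio.h>

	#define BATCH_SIZE	32
	#define NR_PAGES	1024UL	/* pages "reclaimed" in this run */

	struct flush_batch {
		unsigned long pfns[BATCH_SIZE];
		unsigned int nr;
	};

	static unsigned long nr_invlpg, nr_full;

	static void drain_per_pfn(struct flush_batch *b)
	{
		/* Variant A: one targeted INVLPG per tracked pfn;
		 * unrelated TLB entries survive. */
		for (unsigned int i = 0; i < b->nr; i++)
			nr_invlpg++;
		b->nr = 0;
	}

	static void drain_full(struct flush_batch *b)
	{
		/* Variant B: ignore the pfns and flush everything once;
		 * cheap to issue, but evicts every TLB entry. */
		nr_full++;
		b->nr = 0;
	}

	static unsigned long reclaim(void (*drain)(struct flush_batch *))
	{
		struct flush_batch b = { .nr = 0 };
		unsigned long ipis = 0;

		for (unsigned long pfn = 0; pfn < NR_PAGES; pfn++) {
			b.pfns[b.nr++] = pfn;	/* defer, batch the pfn */
			if (b.nr == BATCH_SIZE) {
				drain(&b);	/* one IPI per batch */
				ipis++;
			}
		}
		if (b.nr) {
			drain(&b);
			ipis++;
		}
		return ipis;
	}

	int main(void)
	{
		unsigned long ipis_a = reclaim(drain_per_pfn);
		unsigned long ipis_b = reclaim(drain_full);

		printf("per-pfn: %lu IPIs, %lu INVLPGs\n", ipis_a, nr_invlpg);
		printf("full:    %lu IPIs, %lu full flushes\n", ipis_b, nr_full);
		return 0;
	}

[ Either way the IPI count drops from one per page to one per batch; the
  open question is whether the TLB entries preserved by the per-pfn drain
  pay for its extra INVLPGs, which is exactly the part that cannot be
  answered independently of the workload. ]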

Stop this crap.

I made a really clear and unambiguous chain of arguments:

 - I'm unconvinced about the benefits of INVLPG in general, and your patches add
   a whole bunch of new ones. I cited measurements and went out on a limb to 
   explain my position, backed with numbers and logic. It's admittedly still a 
   speculative position and I might be wrong, but I think it's a well-grounded 
   position that you cannot just brush aside.

 - I suggested that you split this work into steps: first the simpler approach
   that will give us at least 95% of the benefits, then the more complex one on
   top of it. Your claim that I'm blocking a clear improvement is false, pure
   demagogy!

 - I very clearly stated that I am more than willing to be convinced by numbers.
   It's not _that_ hard to construct a memory-thrashing workload with a
   TLB-efficient iteration that uses, say, 80% of the TLB cache, to measure the
   worst-case overhead of full flushes. See the sketch below.
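
[ A minimal sketch of such a workload, under assumptions that need checking
  per CPU: a 1536-entry second-level TLB (NPAGES targets ~80% of it) and
  the default x86 single-page flush ceiling of 33 pages, so zapping a
  64-page scratch range via madvise(MADV_DONTNEED) takes the full-flush
  path. The ceiling is tunable via
  /sys/kernel/debug/x86/tlb_single_page_flush_ceiling: ]

	/*
	 * Walk one byte per page across a working set covering ~80% of
	 * an assumed 1536-entry STLB: TLB-bound, cache-friendly. In
	 * "flush" mode, dirty and zap a 64-page scratch range each pass,
	 * which should force a full TLB flush and evict the hot entries.
	 */
	#include <stdio.h>
	#include <string.h>
	#include <time.h>
	#include <sys/mman.h>

	#define PAGE_SZ		4096UL
	#define TLB_ENTRIES	1536UL			/* assumed STLB size */
	#define NPAGES		(TLB_ENTRIES * 8 / 10)	/* ~80% of the TLB */
	#define SCRATCH_PAGES	64UL			/* > flush ceiling (33) */
	#define PASSES		20000

	static double now_sec(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return ts.tv_sec + ts.tv_nsec / 1e9;
	}

	int main(int argc, char **argv)
	{
		int flush = argc > 1 && !strcmp(argv[1], "flush");
		volatile unsigned char *buf;
		unsigned char *scratch;
		double t0;

		buf = mmap(NULL, NPAGES * PAGE_SZ, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		scratch = mmap(NULL, SCRATCH_PAGES * PAGE_SZ,
			       PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED || scratch == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset((void *)buf, 1, NPAGES * PAGE_SZ); /* fault pages in */

		t0 = now_sec();
		for (int p = 0; p < PASSES; p++) {
			for (unsigned long i = 0; i < NPAGES; i++)
				(void)buf[i * PAGE_SZ];
			if (flush) {
				/* Populate, then zap: zapping more present
				 * pages than the ceiling takes the
				 * full-flush path on default kernels. */
				for (unsigned long i = 0; i < SCRATCH_PAGES; i++)
					scratch[i * PAGE_SZ] = 1;
				madvise(scratch, SCRATCH_PAGES * PAGE_SZ,
					MADV_DONTNEED);
			}
		}
		printf("%s: %.3f s\n",
		       flush ? "with full flushes" : "baseline",
		       now_sec() - t0);
		return 0;
	}

[ Run both modes pinned to one CPU; the delta approximates the worst-case
  cost of a reclaim-driven full flush for a TLB-resident workload. The
  scratch refaults add cost of their own, so for a tighter comparison also
  run with SCRATCH_PAGES below the ceiling, where the zap flushes
  page-by-page and leaves the hot entries alone. ]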

I'm really sick of this partly deceptive, partly passive-aggressive discussion 
style that seems to frequently permeate VM discussions and that made sched/numa 
such a huge PITA in the past...

And the numbers in the v6 series you submitted today support my position, so I 
think you owe me an apology ...

Thanks,

	Ingo

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .



