On Tue, 18 Oct 2011, Dimitri Sivanich wrote:

> After further testing, substantial increases in the ZVC delta, along with
> cache alignment of the vm_stat array, bring the tmpfs writeback throughput
> numbers to about where they are with vm.overcommit_memory==OVERCOMMIT_NEVER.
> I still need to determine how high the ZVC delta needs to be to achieve this
> performance, but it is greater than 125.

Sounds like this is the way to go then.

> Would it make sense to have the ZVC delta be tunable (via /proc/sys/vm?),
> keeping the same default behavior as what we currently have?

I think so.

> If the thresholds get set higher, it could be that some values that don't
> normally have as big a delta may not get updated frequently enough. Should we
> maybe update all values every time a threshold is hit, as the patch below was
> intending?

Mel can probably chime in on the accuracy needed for reclaim etc. We already
have an automatic reduction of the delta if the VM gets into trouble.

> Note that having each counter in a separate cacheline does not have much, if
> any, effect.

It may have a good effect if you group the counters according to their uses
into different cachelines. Counters that are typically updated together need
to be close to each other.

Also, you could modify my patch to only update counters in the same cacheline.
I think updating all counters is what caused the problems with that patch,
because we then touch multiple cachelines and increase the cache footprint of
critical VM functions.