Prathu Baronia <prathu.baronia@xxxxxxxxxxx> writes:

> The 04/11/2020 13:47, Alexander Duyck wrote:
>>
>> This is an interesting data point. So running things in reverse seems
>> much more expensive than running them forward. As such I would imagine
>> process_huge_page is going to be significantly more expensive than on
>> ARM64 since it will wind through the pages in reverse order from the
>> end of the page all the way down to wherever the page was accessed.
>>
>> I wonder if we couldn't simply change process_huge_page to process
>> pages in two passes? The first being from addr_hint + some offset to
>> the end, and then loop back around to the start of the page for the
>> second pass and just process up to where we started the first pass.
>> The idea would be that the offset would be enough so that we have the
>> 4K that was accessed plus some range before and after the address
>> hopefully still in the L1 cache after we are done.
>
> That's a great idea. We were working on a similar idea for the v2
> patch, and your suggesting it has reassured us of our approach. This
> will incorporate the benefits of the optimized memset and will keep the
> cache hot around the faulting address.
>
> Earlier we had taken this offset as 0.5 MB, and after your response we
> have reduced it to 32 KB. As we understand there is a trade-off
> associated with keeping this value too high, we would really appreciate
> it if you could suggest a method to derive an appropriate value for
> this offset from the L1 cache size.

I don't think we should only keep the L1 cache hot. It is good to keep
the L2 cache hot too; that could be 1 MB on an x86 machine. In theory,
it is better to keep as much cache hot as possible. I understand that on
your system the benefit of keeping the cache hot is offset by the slower
backward zeroing, so you need to balance between them. But because
backward zeroing is as fast as forward zeroing on x86, we should
consider that too.
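For reference, the two-pass scheme discussed above could be sketched as
follows. This is only a userspace illustration with memset, not the actual
process_huge_page implementation; the huge page size, the 32 KB window,
and the function name are assumptions for the example. The region around
the faulting address is written last, so it is the most likely to still
be in cache when the fault returns:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HPAGE_SIZE   (2UL * 1024 * 1024) /* assumed 2 MB huge page */
#define HOT_WINDOW   (32UL * 1024)       /* assumed keep-hot offset */

/*
 * Pass 1 zeroes forward from (addr_hint + HOT_WINDOW) to the end of the
 * page; pass 2 wraps around and zeroes from the start of the page up to
 * where pass 1 began.  Both passes run forward, and the bytes written
 * last are those just before the pass-1 start, i.e. the window around
 * the faulting address.
 */
static void clear_huge_page_two_pass(uint8_t *page, size_t addr_hint)
{
	size_t start = addr_hint + HOT_WINDOW;

	if (start > HPAGE_SIZE)
		start = HPAGE_SIZE;

	/* Pass 1: from the offset past the fault to the end of the page. */
	memset(page + start, 0, HPAGE_SIZE - start);

	/* Pass 2: from the start of the page up to where pass 1 began. */
	memset(page, 0, start);
}
```

Making HOT_WINDOW a per-architecture (or runtime-tunable) parameter would
let x86 pick a larger value to cover L2 as well, while ARM64 keeps it
small.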
Maybe we need to use two different implementations on x86 and ARM, or use
some parameter to tune it for different architectures.

Best Regards,
Huang, Ying