Hi,

I recently noticed that a STREAM benchmark took 30% longer on its first iteration because the kernel had to zero the pages of the output array on first touch. Clearing a huge page on a page fault is a particularly substantial overhead: it evicts the workload's cached data while the workload is running and cuts into the available memory bandwidth. Is there a reason Linux does not zero free pages in the background, as some other OSes do, to reduce this overhead? It would be a good fit for typical mobile workloads (bursts of high activity followed by periods of low activity).

Wilco
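A minimal sketch of the effect being described (not the benchmark referenced above), assuming a 4 KiB base page size and an illustrative 1 GiB anonymous mapping: the first write pass has to fault in and zero every page, while the second pass touches already-populated memory.

    /*
     * Sketch: compare the cost of the first write pass over a freshly
     * mmap'ed anonymous region (page fault + page zeroing per page)
     * with a second pass over the same, already-populated memory.
     * Buffer size and 4 KiB stride are illustrative assumptions.
     */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>

    #define BUF_SIZE (1UL << 30)   /* 1 GiB, arbitrary for illustration */

    static double touch_pass(unsigned char *buf, size_t size)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < size; i += 4096)
            buf[i] = 1;             /* one write per assumed 4 KiB page */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        unsigned char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* First pass pays for the faults plus clearing each page. */
        printf("first touch:  %.3f s\n", touch_pass(buf, BUF_SIZE));
        /* Second pass hits memory that is already populated. */
        printf("second touch: %.3f s\n", touch_pass(buf, BUF_SIZE));

        munmap(buf, BUF_SIZE);
        return 0;
    }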