Hi Dan,

Today's linux-next merge of the cleancache tree got a conflict in
mm/Kconfig between commit 6fc80ef491b981f59233beaf6aeaccc0c947031d
("percpu: use percpu allocator on UP too") from the slab tree and commit
52f08871df905eec43d34d20102cbaf8e397e280 ("mm: cleancache core ops
functions and config") from the cleancache tree.

Just overlapping additions.  I fixed it up (see below) and can carry the
fix as necessary.
--
Cheers,
Stephen Rothwell                    sfr@xxxxxxxxxxxxxxxx

diff --cc mm/Kconfig
index c2c8a4a,9ee0751..0000000
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@@ -302,10 -302,24 +302,32 @@@ config NOMMU_INITIAL_TRIM_EXCES
  	  See Documentation/nommu-mmap.txt for more information.
  
 +#
 +# UP and nommu archs use km based percpu allocator
 +#
 +config NEED_PER_CPU_KM
 +	depends on !SMP
 +	bool
 +	default y
++
+ config CLEANCACHE
+ 	bool "Enable cleancache pseudo-RAM driver to cache clean pages"
+ 	default y
+ 	help
+ 	  Cleancache can be thought of as a page-granularity victim cache
+ 	  for clean pages that the kernel's pageframe replacement algorithm
+ 	  (PFRA) would like to keep around, but can't since there isn't enough
+ 	  memory.  So when the PFRA "evicts" a page, it first attempts to put
+ 	  it into a synchronous concurrency-safe page-oriented pseudo-RAM
+ 	  device (such as Xen's Transcendent Memory, aka "tmem") which is not
+ 	  directly accessible or addressable by the kernel and is of unknown
+ 	  (and possibly time-varying) size.  And when a cleancache-enabled
+ 	  filesystem wishes to access a page in a file on disk, it first
+ 	  checks cleancache to see if it already contains it; if it does,
+ 	  the page is copied into the kernel and a disk access is avoided.
+ 	  When a pseudo-RAM device is available, a significant I/O reduction
+ 	  may be achieved.  When none is available, all cleancache calls
+ 	  are reduced to a single pointer-compare-against-NULL resulting
+ 	  in a negligible performance hit.
+ 
+ 	  If unsure, say Y to enable cleancache
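
For anyone wondering how the "pointer-compare-against-NULL" fast path in
that help text works, it boils down to a single global ops pointer that
stays NULL until a backend registers, so every hook costs one NULL check
when cleancache is unused.  Here is a minimal userspace sketch of that
pattern; all names (demo_cleancache_ops, demo_get_page, etc.) are
illustrative only, not the kernel's actual cleancache interface:

/*
 * Sketch of the cleancache hook pattern: hooks are near-free no-ops
 * until a backend fills in the global ops pointer.
 */
#include <stdio.h>
#include <stddef.h>

struct demo_page { unsigned long index; char data[64]; };

/* Backend operations; unset (NULL) when no pseudo-RAM device exists. */
struct demo_cleancache_ops {
	int  (*get_page)(unsigned long index, struct demo_page *page);
	void (*put_page)(unsigned long index, struct demo_page *page);
};

static struct demo_cleancache_ops *demo_ops; /* NULL => disabled */

/* Hook on the read path: one NULL check when no backend is present. */
static int demo_cleancache_get_page(struct demo_page *page)
{
	if (demo_ops == NULL)
		return -1;	/* miss: caller falls back to disk I/O */
	return demo_ops->get_page(page->index, page);
}

/* Hook called when the PFRA evicts a clean page. */
static void demo_cleancache_put_page(struct demo_page *page)
{
	if (demo_ops == NULL)
		return;		/* no backend: page is simply dropped */
	demo_ops->put_page(page->index, page);
}

int main(void)
{
	struct demo_page p = { .index = 42 };

	/* With demo_ops still NULL, both hooks cost one compare each. */
	demo_cleancache_put_page(&p);
	if (demo_cleancache_get_page(&p) < 0)
		printf("cleancache miss for page %lu, reading from disk\n",
		       p.index);
	return 0;
}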