Hi, Mike,

Mike Snitzer <snitzer@xxxxxxxxxx> writes:

> Looking at Mikulas' wrapper API that you and hch are calling into
> question:
>
> For ARM it is using arch/arm64/mm/flush.c:arch_wb_cache_pmem().
> (And ARM does seem to be providing CONFIG_ARCH_HAS_PMEM_API.)
>
> Whereas x86_64 is using memcpy_flushcache() as provided by
> CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE.
> (Yet ARM does provide arch/arm64/lib/uaccess_flushcache.c:memcpy_flushcache)
>
> Just seems this isn't purely about ARM lacking on an API level (given on
> x86_64 Mikulas isn't only using CONFIG_ARCH_HAS_PMEM_API).
>
> Seems this is more to do with x86_64 having efficient non-temporal
> stores?

Yeah, I think you've got that all right.

> Anyway, I'm still trying to appreciate the details here before I can
> make any forward progress.

Making data persistent on x86_64 requires 3 steps:

1) copy the data into pmem (store instructions)
2) flush the cache lines associated with the data (clflush, clflushopt, clwb)
3) wait for the flush to complete (sfence)

I'm not sure whether other architectures require step 3. Mikulas'
implementation seems to imply that arm64 doesn't require the fence.

The current pmem API provides:

  memcpy*           -- step 1
  memcpy_flushcache -- combines steps 1 and 2
  dax_flush         -- step 2
  wmb*              -- step 3

* not strictly part of the pmem API

So, if you didn't care about performance, you could write generic code
that used only memcpy, dax_flush, and wmb (assuming other arches
actually need the wmb).

What Mikulas did was abstract out an API that generic code could call
and that would work optimally on all architectures. This looks like a
worthwhile addition to the pmem API, to me.

Mikulas, what do you think about refactoring the code as Christoph
suggested?

Cheers,
Jeff

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel