On Thu, Aug 13, 2015 at 9:11 PM, David Miller <davem@xxxxxxxxxxxxx> wrote:
> From: James Bottomley <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>
>> At least on some PA architectures, you have to be very careful.
>> Improperly managed, multiple aliases will cause the system to crash
>> (actually a machine check in the cache chequerboard). For the most
>> temperamental systems, we need the cache line flushed and the alias
>> mapping ejected from the TLB cache before we access the same page at
>> an inequivalent alias.
>
> Also, I want to mention that on sparc64 we manage the cache aliasing
> state in the page struct.
>
> Until a page is mapped into userspace, we just record the most recent
> cpu to store into that page with kernel side mappings. Once the page
> ends up being mapped or the cpu doing kernel side stores changes, we
> actually perform the cache flush.
>
> Generally speaking, I think that all actual physical memory the kernel
> operates on should have a struct page backing it. So this whole
> discussion of operating on physical memory in scatter lists without
> backing page structs feels really foreign to me.

So the only way for page-less pfns to enter the system is through the
->direct_access() method provided by a pmem device's struct
block_device_operations. Architectures that require struct page for
cache management must disable ->direct_access() in this case (see the
sketch below).

If an arch still wants to support pmem+DAX then it needs something like
this patchset (feedback welcome) to map pmem pfns:

https://lkml.org/lkml/2015/8/12/970

Effectively this would disable ->direct_access() on /dev/pmem0, but
permit ->direct_access() on /dev/pmem0m.
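To make the arch opt-out concrete, here is a rough sketch of what I have
in mind. This is not the in-tree pmem driver: the ->direct_access()
prototype is the ~4.2-era one from memory, and the
CONFIG_ARCH_REQUIRES_STRUCT_PAGE symbol is purely hypothetical, standing
in for whatever an architecture like parisc or sparc64 would select when
it needs struct page to manage cache aliasing:

#include <linux/module.h>
#include <linux/blkdev.h>

/* Minimal stand-in for the driver's private data. */
struct pmem_device {
	void		*virt_addr;	/* ioremapped persistent memory */
	phys_addr_t	phys_addr;
	size_t		size;
};

static long pmem_direct_access(struct block_device *bdev, sector_t sector,
			       void **kaddr, unsigned long *pfn, long size)
{
	struct pmem_device *pmem = bdev->bd_disk->private_data;
	size_t offset = sector << 9;

	/* Hand back a kernel mapping and a (possibly page-less) pfn. */
	*kaddr = pmem->virt_addr + offset;
	*pfn = (pmem->phys_addr + offset) >> PAGE_SHIFT;

	return pmem->size - offset;
}

static const struct block_device_operations pmem_fops = {
	.owner		= THIS_MODULE,
#ifndef CONFIG_ARCH_REQUIRES_STRUCT_PAGE
	/*
	 * Only advertise ->direct_access() (and hence DAX) when the
	 * architecture can tolerate pfns without struct page backing;
	 * otherwise leave it NULL so everything goes through normal,
	 * struct-page-backed block I/O.
	 */
	.direct_access	= pmem_direct_access,
#endif
};

The point is just that the method pointer itself is the gate: an arch
that can't cope with page-less pfns never exposes it, and the rest of
the stack falls back to plain block I/O.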