On Sun, Feb 21, 2016 at 1:23 PM, Boaz Harrosh <boaz@xxxxxxxxxxxxx> wrote:
> On 02/21/2016 10:57 PM, Dan Williams wrote:
>> On Sun, Feb 21, 2016 at 12:24 PM, Boaz Harrosh <boaz@xxxxxxxxxxxxx> wrote:
>>> On 02/21/2016 09:51 PM, Dan Williams wrote:
>>> <>
>>>>> Please advise?
>>>>
>>>> When this came up a couple weeks ago [1], the conclusion I came away
>>>> with is
>>>
>>> I think I saw that talk; no, this was not suggested. What was suggested
>>> was an FS / mount knob. That would break semantics; this here does not
>>> break anything.
>>
>> No, it was a MAP_DAX mmap flag, similar to this proposal. The
>> difference being that MAP_DAX was all or nothing (DAX vs page cache)
>> to address MAP_SHARED semantics.
>>
>
> Big difference, no? I'm not talking about cached access at all.
>
>>>
>>>> that if an application wants to avoid the overhead of DAX
>>>> semantics it needs to use an alternative to DAX access methods. Maybe
>>>> a new pmem-aware fs like Nova [2], or some other mechanism that
>>>> bypasses the semantics that existing applications on top of ext4 and
>>>> xfs expect.
>>>>
>>>
>>> But my suggestion does not break any "existing applications" and does
>>> not break any semantics of ext4 or xfs. (That I can see.)
>>>
>>> As I said above, it coexists perfectly with existing applications and
>>> is the best of both worlds. Both kinds of applications can write to
>>> the same page without breaking either one's expectations, old or new.
>>>
>>> Please point me to where I'm wrong in the code submitted?
>>>
>>> Besides, even an FS like Nova will need a per-vma flag like this; it
>>> will need to sort out the different types of applications. So this is
>>> how it is communicated: on the mmap call, how else? And it also works
>>> for xfs or ext4.
>>>
>>> Do you not see how this is entirely different than what was proposed?
>>> Or am I totally missing something? Again, please show me how this
>>> breaks anything's expectations.
>>>
>>
>> What happens for MAP_SHARED mappings with mixed pmem-aware/unaware
>> applications? Does MAP_PMEM_AWARE also imply awareness of other
>> applications that may be dirtying cachelines without taking
>> responsibility for making them persistent?
>>
>
> Sure, please have a look. What happens is that the legacy app will add
> the page to the radix tree, and come the fsync it will be flushed, even
> though a "new-type" app might fault on the same page before or after
> without adding it to the radix tree. So yes, all pages faulted by
> legacy apps will be flushed.
>
> I have manually tested all this and it seems to work. Can you see a
> theoretical scenario where it would not?

I'm worried about the scenario where the pmem-aware app assumes that
none of the cachelines in its mapping are dirty when it goes to issue
pcommit. We'll have two applications with different perceptions of when
writes are durable.

Maybe it's not a problem in practice; at least current-generation x86
cpus flush existing dirty cachelines when performing non-temporal
stores. However, it bothers me that there are cpus where a pmem-unaware
app could prevent a pmem-aware app from making writes durable. It seems
that if one app has established a MAP_PMEM_AWARE mapping, it needs
guarantees that all apps participating in that shared mapping have the
same awareness.

Another potential issue is that MAP_PMEM_AWARE is not enough on its
own. If the filesystem or inode does not support DAX, the application
needs to assume page cache semantics.
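From the application side I'd expect usage along these lines (a minimal
sketch; MAP_PMEM_AWARE and the flag value here are hypothetical
placeholders for the proposed flag, not an existing ABI; the fallback
is the part that matters):

#include <stddef.h>
#include <sys/mman.h>

#ifndef MAP_PMEM_AWARE
#define MAP_PMEM_AWARE 0x40000	/* placeholder value, illustration only */
#endif

static void *map_pmem(int fd, size_t len, int *pmem_aware)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_PMEM_AWARE, fd, 0);
	if (p != MAP_FAILED) {
		*pmem_aware = 1;	/* app now owns cacheline flushing */
		return p;
	}

	/* No DAX (or no flag support): fall back to page cache
	 * semantics and rely on msync/fsync for durability. */
	*pmem_aware = 0;
	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}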
At a minimum MAP_PMEM_AWARE requests would need to fail if DAX is not available.
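On the kernel side, my rough reading of what the proposal implies is
something like this (a sketch, not the actual patch; MAP_PMEM_AWARE,
VM_PMEM_AWARE and dax_radix_tag_dirty() are made-up names from this
thread, while IS_DAX() and file_inode() are existing kernel helpers):

/* mmap time: refuse the flag outright when DAX is not in effect */
static int pmem_aware_mmap_check(struct file *file, unsigned long flags)
{
	if ((flags & MAP_PMEM_AWARE) && !IS_DAX(file_inode(file)))
		return -EOPNOTSUPP;	/* strawman errno */
	return 0;
}

/* fault time: only track dirty entries for legacy (unaware) vmas */
static int dax_track_dirty(struct vm_area_struct *vma,
			   struct address_space *mapping, pgoff_t index)
{
	/* a pmem-aware vma has promised to flush its own stores */
	if (vma->vm_flags & VM_PMEM_AWARE)
		return 0;

	/* legacy vma: tag the entry so fsync/msync writes it back */
	return dax_radix_tag_dirty(mapping, index);	/* hypothetical */
}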
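And for reference, this is the per-write burden a MAP_PMEM_AWARE app
takes on, and the sequence my durability worry above is about (a sketch
assuming the clwb/pcommit intrinsics current gcc exposes behind
-mclwb -mpcommit; on cpus without them you'd substitute clflush):

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE 64

/* Make stores to [addr, addr + len) durable on pmem. */
static void pmem_persist(void *addr, size_t len)
{
	uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHELINE - 1);
	uintptr_t end = (uintptr_t)addr + len;

	for (; p < end; p += CACHELINE)
		_mm_clwb((void *)p);	/* write back each dirty cacheline */
	_mm_sfence();			/* order the write-backs */
	_mm_pcommit();			/* commit accepted stores to media */
	_mm_sfence();			/* fence the pcommit itself */
}

A pmem-unaware writer sharing the mapping never calls anything like
this, which is exactly the mixed-semantics case I'm uneasy about.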