On Tue, Oct 12, 2021 at 1:59 AM Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Mon, Oct 11, 2021 at 2:08 PM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> >
> > +#ifdef CONFIG_ARM64_MTE
> > +#define FAULT_GRANULE_SIZE (16)
> > +#define FAULT_GRANULE_MASK (~(FAULT_GRANULE_SIZE-1))
> [...]
> >
> > If this looks in the right direction, I'll do some proper patches
> > tomorrow.
>
> Looks fine to me. It's going to be quite expensive and bad for caches, though.
>
> That said, fault_in_writable() is _supposed_ to all be for the slow
> path when things go south and the normal path didn't work out, so I
> think it's fine.

Let me get back to this; I'm actually not convinced that we need to
worry about sub-page-size fault granules in fault_in_pages_readable or
fault_in_pages_writeable.

From a filesystem point of view, we can get into trouble when a
user-space read or write triggers a page fault while we're holding
filesystem locks, and that page fault ends up calling back into the
filesystem. To deal with that, we're performing those user-space
accesses with page faults disabled. When a page fault would occur, we
get back an error instead, and then we try to fault in the offending
pages. If a page is resident and we still get a fault trying to access
it, trying to fault in the same page again isn't going to help and we
have a true error.

We're clearly looking at memory at a page granularity; faults at a
sub-page level don't matter at this level of abstraction (but they do
show similar error behavior).

To avoid getting stuck, when it gets a short result or -EFAULT, the
filesystem implements the following backoff strategy: first, it tries
to fault in a number of pages. When the read or write still doesn't
make progress, it scales back and faults in a single page. Finally,
when that still doesn't help, it gives up.
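For illustration only, here's a rough userspace sketch of that backoff
loop. copy_mock() and fault_in_mock() are hypothetical stand-ins for the
copy done under page_fault_disable() and for fault_in_pages_writeable();
the "poisoned page" flag crudely models a fault that faulting-in cannot
fix (e.g. a sub-page tag mismatch). None of this is the actual kernel
code, just the shape of the progress check:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096
#define NPAGES 8

/* Mock page state: resident pages can be copied to; "poisoned" pages
 * never become accessible, even after a fault-in attempt. */
static int page_resident[NPAGES];
static int page_poisoned[NPAGES];

/* Stand-in for the user copy with page faults disabled: returns a
 * short count as soon as it hits a non-resident page. */
static size_t copy_mock(size_t off, size_t len)
{
	size_t done = 0;

	while (done < len) {
		size_t page = (off + done) / PAGE_SIZE;
		size_t chunk = PAGE_SIZE - (off + done) % PAGE_SIZE;

		if (!page_resident[page])
			return done;	/* would have faulted here */
		if (chunk > len - done)
			chunk = len - done;
		done += chunk;
	}
	return done;
}

/* Stand-in for fault_in_pages_writeable(): makes the touched pages
 * resident unless they are poisoned. */
static void fault_in_mock(size_t off, size_t len)
{
	size_t p;

	for (p = off / PAGE_SIZE; p <= (off + len - 1) / PAGE_SIZE; p++)
		if (!page_poisoned[p])
			page_resident[p] = 1;
}

/* The backoff loop: attempt the copy; on no progress, fault in a
 * window of pages, then scale back to a single page, and give up if
 * even that page still can't be accessed. Returns bytes copied. */
static size_t do_write(size_t off, size_t len)
{
	size_t copied = 0;
	size_t window = len;
	int retried_single = 0;

	for (;;) {
		size_t n = copy_mock(off + copied, window);

		copied += n;
		if (copied >= len)
			return copied;
		if (n == 0) {
			if (retried_single)
				return copied;	/* true error: give up */
			if (window <= PAGE_SIZE)
				retried_single = 1;
			window = PAGE_SIZE;	/* scale back */
		} else {
			retried_single = 0;
		}
		if (window > len - copied)
			window = len - copied;
		fault_in_mock(off + copied, window);
	}
}
```

The window sizing here is arbitrary; in the real code the batching and
retry policy live in the individual filesystems. The point is only the
progress check: retrying is useful exactly as long as faulting in made
something accessible that wasn't before.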
This strategy is needed for actual page faults, but it also handles
sub-page faults appropriately, as long as the user-space access
functions give sensible results.

What am I missing?

Thanks,
Andreas

> I do wonder how the sub-page granularity works. Is it sufficient to
> just read from it? Because then a _slightly_ better option might be to
> do one write per page (to catch page table writability) and then one
> read per "granule" (to catch pointer coloring or cache poisoning
> issues)?
>
> That said, since this is all preparatory to us wanting to write to it
> eventually anyway, maybe marking it all dirty in the caches is only
> good.
>
>              Linus