On Wed, May 12, 2021 at 04:46:48PM +0100, Steven Price wrote:
> On 10/05/2021 19:35, Catalin Marinas wrote:
> > On Fri, May 07, 2021 at 07:25:39PM +0100, Catalin Marinas wrote:
> > > On Thu, May 06, 2021 at 05:15:25PM +0100, Steven Price wrote:
> > > > On 04/05/2021 18:40, Catalin Marinas wrote:
> > > > > On Thu, Apr 29, 2021 at 05:06:41PM +0100, Steven Price wrote:
> > > > > > Given the changes to set_pte_at() which mean that tags are restored
> > > > > > from swap even if !PROT_MTE, the only race I can see remaining is the
> > > > > > creation of new PROT_MTE mappings. As you mention, an attempt to change
> > > > > > mappings in the VMM memory space should involve an mmu notifier call,
> > > > > > which I think serialises this. So the remaining issue is doing this in
> > > > > > a separate address space.
> > > > > >
> > > > > > So I guess the potential problem is:
> > > > > >
> > > > > >  * allocate memory MAP_SHARED but !PROT_MTE
> > > > > >  * fork()
> > > > > >  * VM causes a fault in parent address space
> > > > > >  * child does a mprotect(PROT_MTE)
> > > > > >
> > > > > > With the last two potentially racing. Sadly I can't see a good way of
> > > > > > handling that.

[...]

> > Options:
> >
> > 1. Change the mte_sync_tags() code path to set the flag after clearing
> >    and avoid reading stale tags. We document that mprotect() on
> >    MAP_SHARED may lead to tag loss. Maybe we can intercept this in the
> >    arch code and return an error.
>
> This is the best option I've come up with so far - but it's not a good
> one! We can replace the set_bit() with a test_and_set_bit() to catch the
> race after it has occurred - but I'm not sure what we can do about it
> then (we've already wiped the data). Returning an error doesn't seem
> particularly useful at that point; a message in dmesg is about the best
> I can come up with.

What I meant by intercepting is something like arch_validate_flags()
preventing VM_SHARED and VM_MTE together, but only for mprotect(), not
mmap(). However, arch_validate_flags() is currently called on both the
mmap() and mprotect() paths. We can't do much in set_pte_at() to prevent
the race with only a single bit.

> > 2. Figure out some other locking in the core code. However, if
> >    mprotect() in one process can race with a handle_pte_fault() in
> >    another, on the same shared mapping, it's not trivial.
> >    filemap_map_pages() would take the page lock before calling
> >    do_set_pte(), so mprotect() would need the same page lock.
>
> I can't see how this is going to work without harming the performance
> of non-MTE work. Ultimately we're trying to add some sort of locking
> for two (mostly) unrelated processes doing page table operations, which
> will hurt scalability.

Another option is to have an arch callback to force re-faulting on the
pte, i.e. not populating it back after the invalidation in the
change_protection() path. We could do this only if the new pte is tagged
and the page doesn't have PG_mte_tagged set. The faulting path takes the
page lock, IIUC, at least for stage 1; I haven't thought much about
stage 2.

> > 3. Use another PG_arch_3 bit as a lock to spin on in the arch code
> >    (i.e. set it around the other PG_arch_* bit setting).
>
> This is certainly tempting, although sadly the existing
> wait_on_page_bit() is sleeping - so this would either be a literal
> spin, or we'd need to implement a new non-sleeping wait mechanism.

Yeah, it would have to be a custom spinning mechanism, something like:

	/* lock the page */
	while (test_and_set_bit(PG_arch_3, &page->flags))
		smp_cond_load_relaxed(&page->flags, !(VAL & BIT(PG_arch_3)));
	...
	/* unlock the page */
	clear_bit(PG_arch_3, &page->flags);
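To flesh that out a little, the spin could be hidden behind lock/unlock
helpers and used around the clear-then-set sequence. A sketch only:
PG_arch_3 is the hypothetical new page flag from option 3 and the helper
names are made up, but test_and_set_bit_lock()/clear_bit_unlock() are the
existing bit ops with acquire/release semantics, which the plain
test_and_set_bit()/clear_bit() above lack:

	/* Assumes the arch/arm64/kernel/mte.c context (struct page, PG_mte_tagged). */
	static inline void mte_lock_page(struct page *page)
	{
		/* Acquire the bit-lock; spin read-only while it is contended. */
		while (test_and_set_bit_lock(PG_arch_3, &page->flags))
			smp_cond_load_relaxed(&page->flags,
					      !(VAL & BIT(PG_arch_3)));
	}

	static inline void mte_unlock_page(struct page *page)
	{
		/* Release ordering: tag writes are visible before the lock clears. */
		clear_bit_unlock(PG_arch_3, &page->flags);
	}

The tag initialisation would then look something like:

	mte_lock_page(page);
	if (!test_bit(PG_mte_tagged, &page->flags)) {
		mte_clear_page_tags(page_address(page));
		/* Only set the flag once the tags are actually clean. */
		set_bit(PG_mte_tagged, &page->flags);
	}
	mte_unlock_page(page);

so a racing mprotect(PROT_MTE) in another process either takes the lock
first and initialises the tags itself, or waits and then observes
PG_mte_tagged.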
> 4. Sledgehammer locking in mte_sync_page_tags(), add a spinlock only for
>    the MTE case where we have to sync tags (see below). What the actual
>    performance impact of this is I've no idea. It could certainly be bad
>    if there are a lot of pages with MTE enabled, which sadly is exactly
>    the case if KVM is used with MTE :(
>
> --->8----
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 0d320c060ebe..389ad40256f6 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -25,23 +25,33 @@
>  u64 gcr_kernel_excl __ro_after_init;
>  
>  static bool report_fault_once = true;
> +static spinlock_t tag_sync_lock;
>  
>  static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap,
>  			       bool pte_is_tagged)
>  {
>  	pte_t old_pte = READ_ONCE(*ptep);
>  
> +	if (!is_swap_pte(old_pte) && !pte_is_tagged)
> +		return;
> +
> +	spin_lock_irqsave(&tag_sync_lock, flags);
> +
> +	/* Recheck with the lock held */
> +	if (test_bit(PG_mte_tagged, &page->flags))
> +		goto out;

Can we skip the lock if the page already has PG_mte_tagged set? That's
assuming we set the flag only after clearing the tags. The normal case
where mprotect() is called on a page already mapped with PROT_MTE would
not be affected; see the sketch below.

-- 
Catalin
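For illustration, the fast path suggested above, applied to Steven's
draft, might look like the sketch below. This is not a tested patch:
tag_sync_lock and the surrounding structure come from the draft diff (a
real version would need the lock initialised, e.g. DEFINE_SPINLOCK()),
the elided middle is whatever swap-restore/clear logic the full function
has, and memory-ordering details for the unlocked test_bit() are glossed
over:

	static void mte_sync_page_tags(struct page *page, pte_t *ptep,
				       bool check_swap, bool pte_is_tagged)
	{
		pte_t old_pte = READ_ONCE(*ptep);
		unsigned long flags;

		/* Fast path: tags already initialised, skip the global lock. */
		if (test_bit(PG_mte_tagged, &page->flags))
			return;

		if (!is_swap_pte(old_pte) && !pte_is_tagged)
			return;

		spin_lock_irqsave(&tag_sync_lock, flags);

		/* Recheck with the lock held */
		if (test_bit(PG_mte_tagged, &page->flags))
			goto out;

		/* ... restore tags from swap or clear them here ... */

		/* Set the flag only after the tags have been written. */
		set_bit(PG_mte_tagged, &page->flags);
	out:
		spin_unlock_irqrestore(&tag_sync_lock, flags);
	}

Because the flag is set only after the tags are written, the unlocked
test_bit() can never observe PG_mte_tagged while the tags are still
stale, so the common mprotect()-on-already-PROT_MTE case takes no lock
at all.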