On Tue, May 14, 2024 at 2:28 PM Liam R. Howlett <Liam.Howlett@xxxxxxxxxx> wrote:
>
> * Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> [240514 13:47]:
> > On Mon, 15 Apr 2024 16:35:19 +0000 jeffxu@xxxxxxxxxxxx wrote:
> >
> > > This patchset proposes a new mseal() syscall for the Linux kernel.
> >
> > I have not moved this into mm-stable for a 6.10 merge.  Mainly because
> > of the total lack of Reviewed-by:s and Acked-by:s.
> >
> > The code appears to be stable enough for a merge.
> >
> > It's awkward that we're in conference this week, but I ask people to
> > give consideration to the desirability of moving mseal() into mainline
> > sometime over the next week, please.
>
> I have looked at this code a fair bit at this point, but I wanted to get
> some clarification on the fact that we now have mseal doing checks
> upfront while others fail in the middle.
>
> mbind:
> 	/*
> 	 * If any vma in the range got policy other than MPOL_BIND
> 	 * or MPOL_PREFERRED_MANY we return error. We don't reset
> 	 * the home node for vmas we already updated before.
> 	 */
>
> mlock:
> mlock will abort (through one path) when it sees a gap in mapped areas,
> but will not undo what it did so far.
>
> mlock_fixup can fail on vma_modify_flags(), but previous vmas are not
> updated.  This can fail due to allocation failures on the splitting of
> VMAs (or failed merging).  The allocations could happen before, but this
> is more work (but doable, no doubt).
>
> userfaultfd is... complicated.
>
> And even munmap() can fail and NOT undo the previous split(s).
> mmap.c:
> 	/*
> 	 * If userfaultfd_unmap_prep returns an error the vmas
> 	 * will remain split, but userland will get a
> 	 * highly unexpected error anyway. This is no
> 	 * different than the case where the first of the two
> 	 * __split_vma fails, but we don't undo the first
> 	 * split, despite we could. This is unlikely enough
> 	 * failure that it's not worth optimizing it for.
> 	 */
>
> But this one is different from the others..
> madvise:
> 	/*
> 	 * If the interval [start,end) covers some unmapped address
> 	 * ranges, just ignore them, but return -ENOMEM at the end.
> 	 * - different from the way of handling in mlock etc.
> 	 */
>

Thanks.

The current mseal patch does up-front checking in two situations:

1. When applying mseal(): it checks for unallocated memory in the given
   memory range.

2. When checking the mseal flag during mprotect/munmap/mremap/mmap: the
   check is placed ahead of the main business logic and treated the same
   as the input-argument checks.

> Either we are planning to clean this up and do what we can up-front, or
> just move the mseal check with the rest.  Otherwise we are making a
> larger mess with more technical debt for a single user, and I think this
> is not an acceptable trade-off.
>

The sealing use case is different from the regular mm APIs, and this
doesn't create additional technical debt. Please allow me to explain the
two points separately.

The main use case and threat model is an attacker who exploits a
vulnerability, gains arbitrary write access to the process, and can
manipulate the arguments of syscalls made from some threads. Placing the
check of the mseal flag ahead of mprotect's main business logic is
stricter than doing it in place: it is meant to make things harder for
the attacker, e.g. by blocking an opportunistic munmap attempt made by
modifying the size argument.

Legitimate application code won't call mprotect/munmap on sealed memory,
so from its point of view the precheck and in-place approaches are
equivalent.

The use case of the regular mm APIs (other than sealing) is legitimate
code: such code knows the whole picture of its memory mappings and is
unlikely to rely on opportunistic partial success. The use cases are
different, and I hope we look at that difference rather than force both
into one direction.
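To make the ordering concrete, here is a toy userspace sketch of the
"check ahead of the business logic" pattern. This is not the kernel code:
the struct layout and the names toy_vma/toy_mprotect are made up for
illustration. Each toy "vma" covers [start, end); the precheck scans the
whole range before anything is touched.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model, names and layout are illustrative, not the kernel's. */
struct toy_vma {
	unsigned long start, end;
	bool sealed;
	int prot;
};

/* Pass 1 scans the whole range before anything is modified, treating
 * the seal check like input-argument validation; pass 2 (the "business
 * logic") only runs when no sealed mapping overlaps the range. */
static int toy_mprotect(struct toy_vma *v, size_t n,
			unsigned long start, unsigned long end, int prot)
{
	for (size_t i = 0; i < n; i++)	/* precheck: all-or-nothing */
		if (v[i].sealed && v[i].start < end && v[i].end > start)
			return -1;	/* EPERM-style, nothing touched */

	for (size_t i = 0; i < n; i++)	/* apply the new protection */
		if (v[i].start < end && v[i].end > start)
			v[i].prot = prot;
	return 0;
}
```

If the seal check were instead done in place while walking the range, the
mappings before the sealed one would already carry the new protection by
the time the walk hit the sealed mapping, which is exactly the partial
state the attacker could try to exploit.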
About tech debt: code-wise, placing the precheck ahead of the main
business logic of the mprotect/munmap APIs reduces the size of the code
change and makes it easy to carry from release to release, or to
backport.

Let's compare with the alternative, checking in place without a precheck:

- munmap
  munmap calls arch_unmap(mm, start, end) ahead of its main business
  logic, so the check of the sealing flag would need to be
  architecture-specific. In addition, if arch_unmap failed due to
  sealing, the code would still proceed until the main business logic
  failed again.

- mremap/mmap
  The sealing checks would be scattered: checking the source address
  range in place, the destination range in place, the unmap in place,
  etc. The code would be complex and error-prone.

- mprotect/madvise
  Easy to change to in-place checks.

- mseal
  mseal() checks for unallocated memory in the given range in its
  precheck. Easy to change to an in-place check (same as mprotect).

The situations in munmap and mremap/mmap make in-place checks less
desirable, imo.

> Considering the benchmarks that were provided, performance arguments
> seem like they are not a concern.
>

Yes. Performance is not a factor in making a design choice here.

> I want to know if we are planning to sort and move existing checks if we
> proceed with this change?
>

I would argue that we should not change the existing mm code. mseal is
new, so it has no backward-compatibility problem. That is not the case
for mprotect and the other mm APIs: e.g. if we were to change mprotect to
add a precheck for memory gaps, some badly written applications might
break.

A fully 'atomic' approach is also really difficult to enforce across the
whole mm area, and mseal() doesn't claim to be atomic. Most of the
regular mm APIs go deep into the mm data structures to update page
tables, hardware state, etc.; rolling all of that back in the error cases
has a complexity and performance cost, and I'm not sure the benefit is
worth it. In any case, atomicity is a separate topic, unrelated to mm
sealing.
The current design of mm sealing follows from its use case and from
practical coding reasons.

Thanks
-Jeff

> Thanks,
> Liam