On Fri, Jan 17, 2025 at 11:35 AM enh <enh@xxxxxxxxxx> wrote:
>
> On Fri, Jan 17, 2025 at 1:20 PM Jeff Xu <jeffxu@xxxxxxxxxxxx> wrote:
> >
> > On Thu, Jan 16, 2025 at 9:18 AM Pedro Falcato <pedro.falcato@xxxxxxxxx> wrote:
> > >
> > > On Thu, Jan 16, 2025 at 5:02 PM Benjamin Berg <benjamin@xxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > Hi Lorenzo,
> > > >
> > > > On Thu, 2025-01-16 at 15:48 +0000, Lorenzo Stoakes wrote:
> > > > > On Wed, Jan 15, 2025 at 12:20:59PM -0800, Jeff Xu wrote:
> > > > > > On Wed, Jan 15, 2025 at 11:46 AM Lorenzo Stoakes
> > > > > > <lorenzo.stoakes@xxxxxxxxxx> wrote:
> > > > >
> > > > > [SNIP]
> > > > >
> > > > > > > I've made it abundantly clear that this (NACKed) series cannot allow the
> > > > > > > kernel to be in a broken state even if a user sets flags to do so.
> > > > > > >
> > > > > > > This is because users might lack context to make this decision and
> > > > > > > incorrectly do so, and now we ship a known-broken kernel.
> > > > > > >
> > > > > > > You are now suggesting disabling the !CRIU requirement. Which violates my
> > > > > > > _requirements_ (not optional features).
> > > > > > >
> > > > > > Sure, I can add CRIU back.
> > > > > >
> > > > > > Are you fine with UML and gViso not working under this CONFIG?
> > > > > > UML/gViso doesn't use any KCONFIG like CRIU does.
> > > > >
> > > > > Yeah this is a concern, wouldn't we be able to catch UML with a flag?
> > > > >
> > > > > Apologies my fault for maybe not being totally up to date with this, but what
> > > > > exactly was the gViso (is it gVisor actually?)
> > > >
> > > > UML is a separate architecture. It is a Linux kernel running as a
> > > > userspace application on top of an unmodified host kernel.
> > > >
> > > > So really, UML is a mostly weird userspace program for the purpose of
> > > > this discussion. And a pretty buggy one too--it got broken by rseq
> > > > already.
> > > >
> > > > What UML now does is:
> > > >  * Execute a tiny static binary
> > > >  * map special "stub" code/data pages at the topmost userspace address
> > > >    (replacing its stack)
> > > >  * continue execution inside the "stub" pages
> > > >  * unmap everything below the "stub" pages
> > > >  * use the unmapped area for userspace application mappings
> > > >
> > > > I believe that the "unmap everything" step will fail with this feature.
> > > >
> > > > Now, I am sure one can come up with solutions, e.g.:
> > > >  1. Simply print an explanation if the unmap() fails
> > > >  2. Find an address that is guaranteed to be below the VDSO and use a
> > > >     smaller address space for the UML userspace.
> > > >  3. Somehow tell the host kernel to not install the VDSO mappings
> > > >  4. Add the host VDSO pages as a sealed VMA within UML to guard them
> > > >
> > > > UML is a bit of a niche and I am not sure it is worth worrying about it
> > > > too much.
> > >
> > > I've been absent from this patch series in general, but this gave me
> > > an idea: what if we let userspace seal these mappings itself? Since
> > > glibc is already sealing things, it might as well seal these?
> > > And then systems that _do_ care about this would set the glibc tunable
> > > and deal with the breakage.
> > >
> > > Is there something seriously wrong with this approach? Besides maybe
> > > not having a super easy way to discover these mappings atm, I feel
> > > like it would solve all of the policy issues people have been talking
> > > about in these threads.
> > >
> > There are technical difficulties to seal vdso/vvar from the glibc
> > side. The dynamic linker lacks vdso/vvar mapping size information, and
> > the architectural variations for vdso/vvar also mean that sealing from
> > the kernel side is a simpler solution. Adhemerval has more details in
> > case clarification is needed from the glibc side.
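
For concreteness, here is a rough sketch of the userspace sealing approach
being discussed, purely as a hypothetical illustration and not part of this
series: it finds the [vdso]/[vvar] VMAs by parsing /proc/self/maps and seals
them with mseal(2), available since Linux 6.10. The raw syscall number is
used in case the libc does not provide a wrapper yet.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* generic/x86-64 syscall number for mseal(2) */
#endif

/* Seal the special mappings this thread is about ([vdso], [vvar]). */
static int seal_special_mappings(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	int ret = 0;

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		unsigned long start, end;

		if (!strstr(line, "[vdso]") && !strstr(line, "[vvar]"))
			continue;

		/* Each maps line starts with "start-end perms ...". */
		if (sscanf(line, "%lx-%lx", &start, &end) != 2)
			continue;

		/* The flags argument of mseal(2) must be 0. */
		if (syscall(__NR_mseal, start, end - start, 0UL) < 0)
			ret = -1;
	}

	fclose(f);
	return ret;
}

This also illustrates the discoverability problem Pedro mentions: without
something better than /proc/self/maps, string-matching on "[vdso]"/"[vvar]"
is about the best userspace can do today.
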
>
> as a maintainer of a different linux libc, i've long wanted a "tell me
> everything there is to know about this vma" syscall rather than having
> to parse /proc/maps...
>
That would be an interesting mm feature, i.e. querying the VMA information
for a given address. ASLR might be a thing to consider, and there are
sandbox solutions, such as Landlock, that block reads of /proc/pid/maps.

The glibc dynamic linker gets the mapping size information from the ELF
header of the .so during the execve() call. In a previous attempt to seal
the vdso from glibc, the size of vdso.so (in PT_LOAD) was found to be
inaccurate. To make things more difficult, the vvar size might not be
present, IIUC.

> ...but in this special case, is the vdso/vvar size ever anything other
> than "one page" in practice?
>
Yes. On x86, the vdso can be two pages long.

> > Additionally, uprobe mappings can't be sealed by the dynamic linker;
> > the dynamic linker can only apply sealing during execve() and dlopen(),
> > and uprobe mappings aren't created during those two calls.
> >
> > -Jeff
> >
> >
> > > --
> > > Pedro
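
To make the PT_LOAD point above concrete, here is a small standalone sketch,
assuming a glibc-style userspace with getauxval(): it reads the vdso's own
ELF program headers via AT_SYSINFO_EHDR. Comparing the printed PT_LOAD size
against the [vdso] and [vvar] ranges in /proc/self/maps shows why the ELF
header alone is not enough for the dynamic linker to seal both mappings.

#include <elf.h>
#include <link.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* AT_SYSINFO_EHDR points at the vdso's ELF header in our own memory. */
	const ElfW(Ehdr) *ehdr = (const ElfW(Ehdr) *)getauxval(AT_SYSINFO_EHDR);
	const ElfW(Phdr) *phdr;
	int i;

	if (!ehdr)
		return 1;	/* no vdso exposed to this process */

	phdr = (const ElfW(Phdr) *)((const char *)ehdr + ehdr->e_phoff);
	for (i = 0; i < ehdr->e_phnum; i++) {
		if (phdr[i].p_type == PT_LOAD)
			printf("vdso PT_LOAD: filesz=0x%lx memsz=0x%lx\n",
			       (unsigned long)phdr[i].p_filesz,
			       (unsigned long)phdr[i].p_memsz);
	}

	/*
	 * The PT_LOAD size describes only the vdso image itself; it says
	 * nothing about the separate vvar pages, which is part of why
	 * sealing these mappings from the dynamic linker is awkward.
	 */
	return 0;
}
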