On Thu, Dec 19, 2024 at 11:28:27AM -0300, André Almeida wrote:
> On 17/12/2024 17:31, Peter Zijlstra wrote:
> > On Tue, Dec 17, 2024 at 02:49:55PM -0300, André Almeida wrote:
> > > This patch adds a new robust_list() syscall. The current syscall
> > > can't be expanded to cover the following use case, so a new one is
> > > needed. This new syscall allows users to set multiple robust lists per
> > > process and to have either 32bit or 64bit pointers in the list.
> >
> > Last time a whole list of shortcomings of the current robust scheme
> > were laid bare. I feel we should address all that if we're going to
> > create a new scheme.
>
> Are you talking about [1] or is there something else?
>
> [1] https://lore.kernel.org/lkml/87jzdjxjj8.fsf@xxxxxxxxxxxxxxxxxxxxxxxxx/

Correct, that thread.

So at the very least I think we should enforce natural alignment of the
robust entry -- this ensures the whole object is always on a single
page. This should then allow emulators (like QEMU) to convert things
back to native address space.

Additionally, I think we can replace the LIST_LIMIT -- whose purpose is
to mitigate the danger of loops -- with the kernel simply destroying the
list while it iterates it. That way it cannot be caught in loops, no
matter what userspace did.

That then leaves the whole munmap() race -- and I'm not really sure what
to do about that one. I did outline two options, but they're both quite
terrible. The mmap()/munmap() code would need to serialize against
list_op_pending without incurring undue overhead in the common case.

Ideally we make the whole thing using RSEQ such that list_op_pending
becomes atomic vs preemption -- but I've not thought that through.