----- On Nov 23, 2018, at 9:28 AM, Rich Felker dalias@xxxxxxxx wrote:

[...]
>
> Absolutely. As long as it's in libc, implicit destruction will happen.
> Actually I think the glibc code should unconditionally unregister the
> rseq address at exit (after blocking signals, so no application code
> can run) in case a third-party rseq library was linked and failed to
> do so before thread exit (e.g. due to mismatched ref counts) rather
> than respecting the reference count, since it knows it's the last
> user. This would make potentially-buggy code safer.

OK, let me go ahead with a few ideas/questions along that path.

Let's say our stated goal is to let the "exit" system call from the
glibc thread exit path perform the rseq unregistration (without any
explicit unregistration beforehand). Let's look at what we need.

First, we need the TLS area to be valid until the exit system call is
invoked by the thread. If glibc defines __rseq_abi as a weak symbol,
I'm not entirely sure we can guarantee the initial-exec (IE) model if
another library gets its own global-dynamic weak symbol elected at
execution time. Would it be better to switch to a "strong" symbol for
the glibc __rseq_abi rather than a weak one?

If we rely on implicit unregistration by the exit system call, then we
need to be really sure that the __rseq_abi TLS area can be accessed
(load and store) from kernel preemption up to the point where exit is
invoked. If we have that guarantee with the IE model, then we should
be fine. This means the memory area where __rseq_abi sits can only be
re-used after the tid field in the TCB is set to 0 by the exit system
call. Looking at allocatestack.c, it looks like the FREE_P () macro
checks exactly that.

With all of the above respected, we could rely on implicit rseq
unregistration at thread exit rather than doing an explicit
unregister. We would still need to increment __rseq_refcount upon
thread start, however, so we can ensure early-adopter libraries won't
unregister rseq while glibc is using it. There is no need to bring the
refcount back to 0 in glibc, though.

There have been presumptions throughout this email thread about
signals being blocked when the thread exits. Out of curiosity, what
code is responsible for disabling signals in this situation?

Related to this, is it valid to access an IE-model TLS variable from a
signal handler at _any_ point where the signal handler nests over the
thread's execution? This includes early thread start and just before
invoking the exit system call.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
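
PS: a few sketches in C to make the above concrete. None of this is
actual glibc code; take it as hedged pseudo-patches against the ideas
discussed above.

First, the symbol/TLS-model question. A definition that explicitly
requests the IE model could look like the one below; whether it should
additionally be declared weak is exactly what I am asking above.

#include <linux/rseq.h>

/* Sketch only: a glibc-side definition of the rseq TLS area.  The
   tls_model attribute forces initial-exec, so every access compiles
   to a fixed offset from the thread pointer, with no lazy allocation
   that could fail or run allocation code when nested in a signal
   handler.  struct rseq carries its own 32-byte alignment attribute
   in the UAPI header, so none is repeated here.  */
__thread struct rseq __rseq_abi
  __attribute__ ((tls_model ("initial-exec"))) = {
    .cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
  };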
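
Second, the belt-and-suspenders exit path from your quote: block
signals, then unregister unconditionally rather than trusting
third-party reference counts. RSEQ_SIG below is a stand-in for the
arch-specific signature, __NR_rseq needs recent kernel headers, and
the helper name is made up.

#include <signal.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/rseq.h>

#define RSEQ_SIG 0x53053053  /* assumption: per-arch signature value */

extern __thread struct rseq __rseq_abi;

/* Sketch: run on the glibc thread exit path, right before the exit
   system call.  Once all signals are blocked, no application handler
   can run, so nothing can observe or re-register rseq after the
   unconditional unregister below.  */
static void
unregister_rseq_at_exit (void)
{
  sigset_t set;
  sigfillset (&set);
  sigprocmask (SIG_BLOCK, &set, NULL);
  (void) syscall (__NR_rseq, &__rseq_abi, sizeof (struct rseq),
                  RSEQ_FLAG_UNREGISTER, RSEQ_SIG);
}

If we rely on implicit unregistration instead, this helper disappears
entirely: the kernel drops the registration when the thread dies,
which is the path explored above.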
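
Third, the thread-start side, reusing RSEQ_SIG and the includes from
the previous sketch. It assumes __rseq_refcount is the per-thread
counter that early-adopter libraries already pair their own
register/unregister through; that convention comes from this thread,
not from an established API.

extern __thread struct rseq __rseq_abi;   /* as above */
extern __thread int __rseq_refcount;

/* Sketch: called early in start_thread ().  The first user registers;
   because glibc never decrements afterwards, a library doing its own
   balanced increment/decrement can never bring the count to zero and
   unregister while glibc still relies on rseq.  */
static void
register_rseq_at_start (void)
{
  if (__rseq_refcount++ == 0)
    (void) syscall (__NR_rseq, &__rseq_abi, sizeof (struct rseq),
                    0, RSEQ_SIG);
}

For reference, the allocatestack.c check I mention above is simply
"#define FREE_P(descr) ((descr)->tid <= 0)", if I read it right: the
stack, and the IE TLS block living in it, is only considered reusable
once the kernel has cleared the tid through the CLONE_CHILD_CLEARTID
mechanism.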