On Sun, Jan 31, 2016 at 12:08 PM, Stas Sergeev <stsp@xxxxxxx> wrote:
> 31.01.2016 22:03, Andy Lutomirski wrote:
>> Also, consider a use case like yours but with *two* contexts that use
>> their own altstack.  If you go to context A, enable sigaltstack, get a
>> signal, temporarily disable, then swapcontext to B, which tries to
>> re-enable its own sigaltstack, then everything gets confusing with
>> your patch, because, with your patch, the kernel is only tracking one
>> temporarily disabled sigaltstack.
>
> Of course the good practice is to set the sigaltstack
> before creating the contexts.  Then the above scenario
> would have to involve switching between 2 signal handlers to get
> into trouble.  I think the scenario with switching between
> 2 signal handlers is very, very unrealistic.

Why is it so unrealistic?  You're already using swapcontext, which
means you're doing something like userspace threads (although I
imagine that one of your thread-like things is DOS, but still), and,
to me, that suggests that the kernel interface should be agnostic as
to how many thread-like things are alive.

With your patch, where the kernel remembers that you have a
temporarily disabled altstack, you can't swap out your context on one
kernel thread and swap it in on another, and you can't have two
different contexts that get used on the same thread.

ISTM it would be simpler if you did:

  sigaltstack(disable, force)
  swapcontext() to context using sigaltstack
  sigaltstack(set new altstack)

and then later

  sigaltstack(disable, force)  /* just in case. save old state, too. */
  swapcontext() to context not using sigaltstack
  sigaltstack(set new altstack)

If it would be POSIX compliant to allow SS_DISABLE to work even while
on the altstack, even without a new flag (which is what you're
suggesting), then getting rid of the temporary in-kernel state would
considerably simplify this patch series.  Just skip the -EPERM check
in the disable path.

--Andy
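
[Editor's sketch of the sequence above, in C.  SS_FORCE is used here only
as a stand-in name for the forced-disable flag under discussion; it is not
an existing kernel flag, and the helper and variable names are made up for
illustration, not part of any proposed patch.]

  #include <signal.h>
  #include <ucontext.h>

  /* Hypothetical flag for the forced-disable behavior discussed above.
   * Plain SS_DISABLE currently fails with EPERM while running on the
   * altstack; the value here is a placeholder, not a real constant. */
  #ifndef SS_FORCE
  #define SS_FORCE 0x2
  #endif

  static ucontext_t main_ctx, other_ctx;

  /* Hand the CPU to the other context, which manages its own altstack,
   * and re-establish ours explicitly when we resume, instead of relying
   * on any in-kernel "temporarily disabled" state. */
  static void switch_to_other(void)
  {
      stack_t dis = { .ss_flags = SS_DISABLE | SS_FORCE };
      stack_t old;

      /* Forcibly drop the altstack even if we are executing on it,
       * saving the old setting so it can be restored later. */
      sigaltstack(&dis, &old);

      /* Run the other context; it is free to install its own altstack. */
      swapcontext(&main_ctx, &other_ctx);

      /* Back on this context: set our altstack again explicitly. */
      sigaltstack(&old, NULL);
  }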