On 07/10/14 12:53, James Hogan wrote:
> On 07/10/14 05:32, David Daney wrote:
>> If the kernel automatically allocated the emulation locations, what
>> would happen if there were a signal that interrupted the emulation,
>> and the signal handler did a longjmp to somewhere else? How would we
>> clean up the now unused emulation memory allocations?
>
> AFAICT, Leonid's implementation also has this problem, and that has a
> separate stack of emuframes per thread managed completely by the
> kernel.
>
> Essentially the kernel doesn't manage the stack, userland does, and
> userland can choose to skip over sigframes and emuframes with
> siglongjmp without telling the kernel.
>
> Userland can even switch between contexts (which includes the stack)
> with setcontext (coroutines etc.), which breaks the assumption in
> Leonid's patches that emuframes will be completed in the reverse
> order to that in which they were started, again demonstrating that it
> is essentially userland that manages the stack.
>
> I think any attempt by the kernel to keep track of user stacks (e.g.
> by storing a stack pointer along with the emuframe so that unused
> emuframes can be discarded later when the stack pointer goes high
> again) will be foiled by setcontext.
>
> Hmm, I can't see a way forward that doesn't involve invasive userland
> handling & ABI changes, other than giving up on non-executable stacks
> or limiting the permitted instructions in delay slots to those Linux
> knows how to emulate directly.

Would it work for a signal encountered during branch delay slot
emulation (perhaps detected by the PC pointing at that magic location
the kernel uses for emulation) to be treated as a return from
emulation, but with the user PC left pointing at the original branch
(with Cause.BD=1, I suppose) before the signal is handled, so that no
more than one emuframe is needed by each thread at a time?

Cheers
James
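
P.S. In case a concrete illustration of the setcontext point helps,
below is a minimal userland sketch (nothing taken from Leonid's
patches; the function names and the "emulated branch" markers are
purely illustrative) showing a thread hopping between two stacks with
swapcontext(). If the marked points were branches whose delay slots
the kernel had started emulating, the frame created second (B) would
complete before the frame created first (A), i.e. not in reverse order
of creation:

/*
 * Illustrative only: userland switching stacks behind the kernel's
 * back.  "Branch A" is started on the coroutine stack, then the thread
 * yields to the main stack, where "branch B" starts and completes
 * before A ever does.
 */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE	(64 * 1024)

static ucontext_t main_ctx, coro_ctx;

static void coro(void)
{
	printf("coroutine: emulated branch A starts\n");
	/* Yield before A completes; its frame stays outstanding. */
	swapcontext(&coro_ctx, &main_ctx);
	printf("coroutine: emulated branch A finally completes\n");
}

int main(void)
{
	char *stack = malloc(STACK_SIZE);

	/* Set up a second context with its own stack. */
	getcontext(&coro_ctx);
	coro_ctx.uc_stack.ss_sp = stack;
	coro_ctx.uc_stack.ss_size = STACK_SIZE;
	coro_ctx.uc_link = &main_ctx;
	makecontext(&coro_ctx, coro, 0);

	swapcontext(&main_ctx, &coro_ctx);	/* run A up to its yield */
	printf("main: emulated branch B starts and completes\n");
	swapcontext(&main_ctx, &coro_ctx);	/* only now does A finish */

	free(stack);
	return 0;
}

The kernel never sees these stack switches, which is why any LIFO
assumption about per-thread emuframes can be broken by ordinary,
conforming userland code.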