Re: [PATCH 10/19] LoongArch: Add signal handling support

Arnd Bergmann <arnd@xxxxxxxx> writes:

> On Fri, Jul 9, 2021 at 11:24 AM Huacai Chen <chenhuacai@xxxxxxxxx> wrote:
>> On Thu, Jul 8, 2021 at 9:30 PM Arnd Bergmann <arnd@xxxxxxxx> wrote:
>> > On Thu, Jul 8, 2021 at 3:04 PM Huacai Chen <chenhuacai@xxxxxxxxx> wrote:
>> > > On Tue, Jul 6, 2021 at 6:17 PM Arnd Bergmann <arnd@xxxxxxxx> wrote:
>> > > > On Tue, Jul 6, 2021 at 6:18 AM Huacai Chen <chenhuacai@xxxxxxxxxxx> wrote:
>> > > > > +
>> > > > > +#ifndef _NSIG
>> > > > > +#define _NSIG          128
>> > > > > +#endif
>> > > >
>> > > > Everything else uses 64 here, except for MIPS.
>> > >
>> > > We originally wanted to use 64 as well, but we also want to use
>> > > LBT to execute X86/MIPS/ARM binaries, so we chose the largest
>> > > value (128). Some applications, such as sighold02 in LTP, will
>> > > fail if _NSIG is not big enough.
>> >
>> > Have you tried separating the in-kernel _NSIG from the number used
>> > in the loongarch ABI? This may require a few changes to architecture
>> > independent signal handling code, but I think it would be a cleaner
>> > solution, and make it easier to port existing software without having
>> > to special-case loongarch along with mips.
>>
>> Jun Yi (yili0568@xxxxxxxxx) is my colleague who develops LBT software;
>> he has some questions about how to "separate the in-kernel _NSIG from
>> the number used in the LoongArch ABI".
>
> This ties in with how the foreign syscall implementation is done for LBT,
> and I don't know what you have today, on that side, since it is not part
> of the initial submission.
>
> I think what this means in the end is that any system call that takes
> a sigset_t argument will have to behave differently based on the
> architecture. At the moment, we have
>
> - compat_old_sigset_t (always 32-bit)
> - old_sigset_t (always word size: 32 or 64)
> - sigset_t (always 64-bit, except on mips, where it is 128-bit)
>
> The code dealing with old_sigset_t/compat_old_sigset_t shows how
> a kernel can deal with having different sigset sizes in user space, but
> now we need the same thing for sigset_t as well, if you have a kernel
> that needs to deal with both 128-bit and 64-bit masks in user space.
>
> Most such system calls currently go through set_user_sigmask or
> set_compat_user_sigmask, which only differ on big-endian.
> I would actually like to see these merged together and have a single
> helper checking for in_compat_syscall() to decide whether to do
> the word-swap for 32-bit big-endian tasks or not, but that's a separate
> discussion (and I suspect that Eric won't like that version, based on
> other discussions we've had).

Reading through get_compat_sigset is the best argument I have ever seen
for getting rid of big endian architectures.  My gut reaction is we
should just sweep all of the big endian craziness into a corner and let
it disappear as the big endian architectures are retired.

Perhaps we generalize the non-compat version of the system calls and
only have a compat version of the system call for the big endian
architectures.

I really hope loongarch and any new architectures added to the tree all
are little endian.

> What I think you need for loongarch though is to change
> set_user_sigmask(), get_compat_sigset() and similar functions to
> behave differently depending on the user space execution context,
> converting the 64-bit masks for loongarch/x86/arm64 tasks into
> 128-bit in-kernel masks, while copying the 128-bit mips masks
> as-is. This also requires changing the sigset_t and _NSIG
> definitions so you get a 64-bit mask in user space, but a 128-bit
> mask in kernel space.
>
> There are multiple ways of achieving this, either by generalizing
> the common code, or by providing an architecture specific
> implementation to replace it for loongarch only. I think you need to
> try out which of those is the most maintainable.

I believe all of the modern versions of the system calls that
take a sigset_t in the kernel also take a sigsetsize.  So the most
straightforward thing to do is to carefully define what happens
to sigsets that are too big or too small when set.

Something like defining that, if a sigset is larger than the kernel's
sigset size, all of the additional bits must be zero, and that, if the
sigset is smaller than the kernel's sigset size, all of the missing
bits will be set to zero in the kernel's sigset_t.  There may be cases
I am missing, but for SIG_SETMASK, SIG_BLOCK, and SIG_UNBLOCK those
look like the correct definitions.

Another option would be to simply have whatever translates the system
calls in userspace perform the work of verifying that the extra bits
in the bitmap are unused before calling system calls that take a
sigset_t, and just ignore the extra bits.

Eric
