Re: For review: seccomp_user_notif(2) manual page

On Sun, Oct 25, 2020 at 5:32 PM Michael Kerrisk (man-pages)
<mtk.manpages@xxxxxxxxx> wrote:
> On 10/1/20 4:14 AM, Jann Horn wrote:
> > On Thu, Oct 1, 2020 at 3:52 AM Jann Horn <jannh@xxxxxxxxxx> wrote:
> >> On Thu, Oct 1, 2020 at 1:25 AM Tycho Andersen <tycho@tycho.pizza> wrote:
> >>> On Thu, Oct 01, 2020 at 01:11:33AM +0200, Jann Horn wrote:
> >>>> On Thu, Oct 1, 2020 at 1:03 AM Tycho Andersen <tycho@tycho.pizza> wrote:
> >>>>> On Wed, Sep 30, 2020 at 10:34:51PM +0200, Michael Kerrisk (man-pages) wrote:
> >>>>>> On 9/30/20 5:03 PM, Tycho Andersen wrote:
> >>>>>>> On Wed, Sep 30, 2020 at 01:07:38PM +0200, Michael Kerrisk (man-pages) wrote:
> >>>>>>>>        ┌─────────────────────────────────────────────────────┐
> >>>>>>>>        │FIXME                                                │
> >>>>>>>>        ├─────────────────────────────────────────────────────┤
> >>>>>>>>        │From my experiments,  it  appears  that  if  a  SEC‐ │
> >>>>>>>>        │COMP_IOCTL_NOTIF_RECV   is  done  after  the  target │
> >>>>>>>>        │process terminates, then the ioctl()  simply  blocks │
> >>>>>>>>        │(rather than returning an error to indicate that the │
> >>>>>>>>        │target process no longer exists).                    │
> >>>>>>>
> >>>>>>> Yeah, I think Christian wanted to fix this at some point,
> >>>>>>
> >>>>>> Do you have a pointer to that discussion? I could not find it with a
> >>>>>> quick search.
> >>>>>>
> >>>>>>> but it's a
> >>>>>>> bit sticky to do.
> >>>>>>
> >>>>>> Can you say a few words about the nature of the problem?
> >>>>>
> >>>>> I remembered wrong, it's actually in the tree: 99cdb8b9a573 ("seccomp:
> >>>>> notify about unused filter"). So maybe there's a bug here?
> >>>>
> >>>> That thing only notifies on ->poll, it doesn't unblock ioctls; and
> >>>> Michael's sample code uses SECCOMP_IOCTL_NOTIF_RECV to wait. So that
> >>>> commit doesn't have any effect on this kind of usage.
> >>>
> >>> Yes, thanks. And the ones stuck in RECV are waiting on a semaphore so
> >>> we don't have a count of all of them, unfortunately.
> >>>
> >>> We could maybe look inside the wait_list, but that will probably make
> >>> people angry :)
> >>
> >> The easiest way would probably be to open-code the semaphore-ish part,
> >> and let the semaphore and poll share the waitqueue. The current code
> >> kind of mirrors the semaphore's waitqueue in the wqh - open-coding the
> >> entire semaphore would IMO be cleaner than that. And it's not like
> >> semaphore semantics are even a good fit for this code anyway.
> >>
> >> Let's see... if we didn't have the existing UAPI to worry about, I'd
> >> do it as follows (*completely* untested). That way, the ioctl would
> >> block exactly until either there actually is a request to deliver or
> >> there are no more users of the filter. The problem is that if we just
> >> apply this patch, existing users of SECCOMP_IOCTL_NOTIF_RECV that use
> >> an event loop and don't set O_NONBLOCK will be screwed. So we'd
> >> probably also have to add some stupid counter in place of the
> >> semaphore's counter that we can use to preserve the old behavior of
> >> returning -ENOENT once for each cancelled request. :(
> >>
> >> I guess this is a nice point in favor of Michael's usual complaint
> >> that if there are no man pages for a feature by the time the feature
> >> lands upstream, there's a higher chance that the UAPI will suck
> >> forever...
> >
> > And I guess this would be the UAPI-compatible version - not actually
> > as terrible as I thought it might be. Do y'all want this? If so, feel
> > free to either turn this into a proper patch with Co-developed-by, or
> > tell me that I should do it and I'll try to get around to turning it
> > into something proper.
>
> Thanks for taking a shot at this.
>
> I tried applying the patch below to vanilla 5.9.0.
> (There's one typo: s/ENOTCON/ENOTCONN).
>
> It seems not to work though; when I send a signal to my test
> target process that is sleeping waiting for the notification
> response, the process enters the uninterruptible D state.
> Any thoughts?

Ah, yeah, I think I was completely misusing the wait API. I'll go change that.

(Btw, in general, for reports about hangs like that, it can be helpful
to have the contents of /proc/$pid/stack; and for cases where CPUs are
spinning, the relevant part of the output from the "L" sysrq, or
something like that.)

Also, I guess we can probably break this part of the UAPI after all,
since the only user of this interface currently seems to be completely
broken in this case anyway? So I think we want the other
implementation, without the ->canceled_reqs logic, after all.
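To make the intended semantics concrete, here is a rough userspace
model of what I mean (pthreads stand in for the shared waitqueue, and
all the names in it, like filter_model, are made up for illustration;
it is not the kernel patch itself): RECV blocks exactly until there is
a request to deliver or no task uses the filter any more, and in the
latter case it fails with ENOTCONN instead of hanging forever.

/* Userspace model only: the "semaphore" is open-coded so that a
 * blocked receiver wakes up either when a request is queued or when
 * the last user of the filter goes away. */
#include <errno.h>
#include <pthread.h>

struct filter_model {
	pthread_mutex_t lock;
	pthread_cond_t wqh;	/* stands in for the shared waitqueue */
	unsigned int pending;	/* queued requests (the old semaphore count) */
	unsigned int users;	/* tasks still using the filter */
};

#define FILTER_MODEL_INIT(nusers) \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, (nusers) }

/* Target side: queue a notification and wake any waiters. */
void model_notify(struct filter_model *f)
{
	pthread_mutex_lock(&f->lock);
	f->pending++;
	pthread_cond_broadcast(&f->wqh);
	pthread_mutex_unlock(&f->lock);
}

/* Filter release: the last user going away also wakes blocked receivers. */
void model_put_user(struct filter_model *f)
{
	pthread_mutex_lock(&f->lock);
	if (--f->users == 0)
		pthread_cond_broadcast(&f->wqh);
	pthread_mutex_unlock(&f->lock);
}

/* What RECV would do: block until there is a request to deliver or
 * there are no users of the filter left. */
int model_recv(struct filter_model *f)
{
	int ret = 0;

	pthread_mutex_lock(&f->lock);
	while (f->pending == 0 && f->users > 0)
		pthread_cond_wait(&f->wqh, &f->lock);
	if (f->pending > 0)
		f->pending--;		/* dequeue and hand out a request */
	else
		ret = -ENOTCONN;	/* no users left: don't block forever */
	pthread_mutex_unlock(&f->lock);
	return ret;
}

The point is just that the pending-request count and the "filter still
has users" condition share one wait/wake path, which is what
open-coding the semaphore buys us.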

I'm a bit on the fence now about whether non-blocking mode should use
ENOTCONN or not... I guess if we returned ENOENT even when there are
no more listeners, you'd have to disambiguate through the poll()
revents, which would be kinda ugly?
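For context, here's roughly what an event-loop supervisor could look
like on top of the EPOLLHUP-on-unused-filter behaviour from
99cdb8b9a573, with the POLLHUP branch being where "no more listeners"
gets detected (just an untested sketch; notifyfd is assumed to be the
listener fd from seccomp() with SECCOMP_FILTER_FLAG_NEW_LISTENER, and
the error handling is the bare minimum):

#include <errno.h>
#include <poll.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/seccomp.h>

static int handle_notifications(int notifyfd)
{
	struct pollfd pfd = { .fd = notifyfd, .events = POLLIN };

	for (;;) {
		if (poll(&pfd, 1, -1) == -1) {
			if (errno == EINTR)
				continue;
			return -1;
		}

		if (pfd.revents & POLLIN) {
			struct seccomp_notif req;

			memset(&req, 0, sizeof(req));	/* RECV wants a zeroed struct */
			if (ioctl(notifyfd, SECCOMP_IOCTL_NOTIF_RECV, &req) == -1) {
				if (errno == ENOENT)
					continue;	/* request was cancelled in the meantime */
				return -1;
			}
			/* ... handle req, reply with SECCOMP_IOCTL_NOTIF_SEND ... */
			continue;
		}

		if (pfd.revents & POLLHUP)
			return 0;	/* no tasks use the filter any more */
	}
}

(poll() reports POLLHUP even though only POLLIN was requested, so the
loop doesn't need to ask for it explicitly.)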

I'll try to turn this into a proper patch submission...



