Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path

Hello,

On Fri, Mar 18, 2022 at 9:43 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Fri, Mar 18, 2022 at 08:55:32PM +0800, Boqun Feng wrote:
> > On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> > [...]
> > > @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> > >                                                             long timeout)
> > >  {
> > >     struct semaphore_waiter waiter;
> > > +   bool tracing = false;
> > >
> > >     list_add_tail(&waiter.list, &sem->wait_list);
> > >     waiter.task = current;
> > > @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> > >             if (unlikely(timeout <= 0))
> > >                     goto timed_out;
> > >             __set_current_state(state);
> > > +           if (!tracing) {
> > > +                   trace_contention_begin(sem, 0);
> >
> > This looks a little ugly ;-/ Maybe we can rename __down_common() to
> > ___down_common() and implement __down_common() as:
> >
> >       static inline int __sched __down_common(...)
> >       {
> >               int ret;
> >               trace_contention_begin(sem, 0);
> >               ret = ___down_common(...);
> >               trace_contention_end(sem, ret);
> >               return ret;
> >       }
> >
> > Thoughts?
>
> Yeah, that works, except I think he wants a few extra
> __set_current_state()'s like so:

Not anymore; I decided against it because of the noise it adds to the
task state.

Also, I'm considering two tracepoints for the return path to reduce
the record size, as Mathieu suggested.  Normally it returns 0, so
contention_end can omit the value; for the non-zero cases we could
add a new tracepoint that saves the return value.

Thanks,
Namhyung

>
> diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
> index 9ee381e4d2a4..e2049a7e0ea4 100644
> --- a/kernel/locking/semaphore.c
> +++ b/kernel/locking/semaphore.c
> @@ -205,8 +205,7 @@ struct semaphore_waiter {
>   * constant, and thus optimised away by the compiler.  Likewise the
>   * 'timeout' parameter for the cases without timeouts.
>   */
> -static inline int __sched __down_common(struct semaphore *sem, long state,
> -                                                               long timeout)
> +static __always_inline int ___down_common(struct semaphore *sem, long state, long timeout)
>  {
>         struct semaphore_waiter waiter;
>
> @@ -227,15 +226,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
>                         return 0;
>         }
>
> - timed_out:
> +timed_out:
>         list_del(&waiter.list);
>         return -ETIME;
>
> - interrupted:
> +interrupted:
>         list_del(&waiter.list);
>         return -EINTR;
>  }
>
> +static __always_inline int __down_common(struct semaphore *sem, long state, long timeout)
> +{
> +       int ret;
> +
> +       __set_current_state(state);
> +       trace_contention_begin(sem, 0);
> +       ret = ___down_common(sem, state, timeout);
> +       __set_current_state(TASK_RUNNING);
> +       trace_contention_end(sem, ret);
> +
> +       return ret;
> +}
> +
>  static noinline void __sched __down(struct semaphore *sem)
>  {
>         __down_common(sem, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);


