Re: [PATCHv2 2/4] coresight: tmc-etf: Fix NULL ptr dereference in tmc_enable_etf_sink_perf()

On 10/23/20 11:54 AM, Peter Zijlstra wrote:
On Fri, Oct 23, 2020 at 11:34:32AM +0100, Suzuki Poulose wrote:
On 10/23/20 10:41 AM, Peter Zijlstra wrote:
On Fri, Oct 23, 2020 at 09:49:53AM +0100, Suzuki Poulose wrote:
On 10/23/20 8:39 AM, Peter Zijlstra wrote:

So then I don't understand the !->owner issue, that only happens when
the task dies, which cannot be concurrent with event creation. Are you

Part of the patch from Sai fixes this by avoiding the dereference
after event creation (by caching it). But the kernel events need
fixing.

I'm fundamentally failing here. Creating a link to the sink is strictly
event-creation time. Why would you ever need it again later? Later you
already have the sink setup.


Sorry for the lack of clarity here; you are not alone, unless you have
been drowned in the CoreSight topologies ;-)

Typically, the current generation of systems has the following topology:

CPU0
  etm0   \
          \  ________
          /          \
CPU1    /            \
   etm1                \
                        \
                        /-------  sink0
CPU2                  /
   etm2  \            /
          \ ________ /
          /
CPU3    /
   etm3


i.e., multiple ETMs share a sink. [For the sake of simplicity, I have
shown one sink. Even though there could be multiple potential sinks (of
different types), none of them are private to the ETMs. So, in a nutshell,
a sink can be reached by multiple ETMs.]

Now, for a session:

perf record -e cs_etm/sinkid=sink0/u workload

We create an event per CPU (say eventN); these are scheduled based on the
threads that could execute on that CPU. At this point we have finalized
sink0 and have allocated the necessary buffer for sink0.

Now, when the threads are scheduled on the CPUs, we start the
appropriate events for the CPUs.

e.g.,
  CPU0 sched -> workload:0 -> etm0 -> event0_start -> turns on all
the components up to sink0, starting trace collection in the buffer.

Now, if another CPU, say CPU1, starts tracing event1 for the workload:1
thread, it will eventually try to turn ON sink0. Since sink0 is already
active tracing event0, we can allow this to go through and collect
the trace in the *same hardware buffer* (which can be demuxed from the
single AUX record using the TraceID in the packets). Please note that
we do double buffering, and the hardware buffer is copied out only when
sink0 is stopped (see below).
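
(To make the double-buffering point concrete, here is a minimal userspace
C model of it; the names and sizes are made up and this is not the driver
code: the "hardware" fills the sink's internal buffer while tracing, and
the contents are copied into the perf AUX buffer only when the sink stops.)

/* Userspace model only; hypothetical names, not coresight driver code. */
#include <stdio.h>
#include <string.h>

#define HW_BUF_SZ  64           /* sink's internal RAM (e.g. ETF SRAM)   */
#define AUX_BUF_SZ 256          /* perf AUX ring buffer for the event    */

static char hw_buf[HW_BUF_SZ];  /* written by the "hardware" while active */
static size_t hw_used;

static char aux_buf[AUX_BUF_SZ]; /* what perf eventually sees             */
static size_t aux_head;

static void sink_trace(const char *packet)      /* hardware side          */
{
	size_t len = strlen(packet);

	if (hw_used + len <= HW_BUF_SZ) {
		memcpy(hw_buf + hw_used, packet, len);
		hw_used += len;
	}                                       /* else: wrap/drop        */
}

static size_t sink_stop(void)                   /* driver side: copy out  */
{
	memcpy(aux_buf + aux_head, hw_buf, hw_used);
	aux_head += hw_used;
	hw_used = 0;
	return aux_head;            /* size reported in the AUX record     */
}

int main(void)
{
	sink_trace("etm0:pkt ");    /* TraceID of etm0                     */
	sink_trace("etm1:pkt ");    /* TraceID of etm1, same hw buffer     */
	printf("copied %zu bytes into AUX on stop\n", sink_stop());
	return 0;
}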

But if the event scheduled on CPU1 doesn't belong to the above session,
and instead belongs to a different perf session
  (say, perf record -e cs_etm/sinkid=sink0/u benchmark),

we can't allow it to succeed and mix its trace data into that of workload,
so we fail the operation.

In a nutshell, since the sinks are shared, we start the sink on the
first event and keep sharing the sink buffer with any event that
belongs to the same session (using refcounts). The sink is only released
for use by other sessions when there are no more events in the session
tracing on any of the ETMs.
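
(Again only a rough userspace C sketch, with hypothetical names, of the
sharing rule above: the first event activates the sink and records which
session owns it, later events join only if they belong to the same
session, and the sink is freed when the last event of that session stops.)

/* Userspace model only; not the actual coresight refcounting code. */
#include <stdio.h>

struct sink {
	int refcount;     /* number of active events using the sink       */
	int session_pid;  /* "session" identity; -1 when the sink is idle */
};

static int sink_grab(struct sink *s, int session_pid)
{
	if (s->refcount && s->session_pid != session_pid)
		return -1;              /* busy with another session: fail */
	s->session_pid = session_pid;
	s->refcount++;
	return 0;
}

static void sink_put(struct sink *s)
{
	if (--s->refcount == 0)
		s->session_pid = -1;    /* last event gone: free the sink  */
}

int main(void)
{
	struct sink sink0 = { 0, -1 };

	printf("event0 (workload) : %d\n", sink_grab(&sink0, 100)); /*  0 */
	printf("event1 (workload) : %d\n", sink_grab(&sink0, 100)); /*  0 */
	printf("eventX (benchmark): %d\n", sink_grab(&sink0, 200)); /* -1 */
	sink_put(&sink0);
	sink_put(&sink0);
	printf("eventX retried    : %d\n", sink_grab(&sink0, 200)); /*  0 */
	return 0;
}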

I know this is fundamentally a topology issue, but that is not something
we can fix. The situation is changing, though, and we are starting to see
systems with per-CPU sinks.

Hope this helps.

I think I'm more confused now :-/

Where do we use ->owner after event creation? The moment you create your
eventN you create the link to sink0. That link either succeeds (same
'cookie') or fails.

The event->sink link is established at creation. At event::add(), we
check that the sink is free (i.e., it is inactive) or is in use by an
event of the same session (this is where the owner field *was* required.
It is not needed anymore, as we cache the pid read from the "owner" in
handle->rb->aux_priv for each event, and this is compared against the
pid from the handle currently driving the hardware).
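
(A sketch, in userspace C with made-up structures, of the fix described
here: read the owner's pid once at event creation and stash it in the
AUX buffer's private data, so that event::add()/start never has to
dereference event->owner, which may be gone for an exiting task.)

/* Userspace model only; structures stand in for the real perf objects. */
#include <stdio.h>

struct aux_priv {
	int owner_pid;          /* cached at event creation time           */
};

struct etm_event {
	struct aux_priv priv;   /* stands in for handle->rb->aux_priv      */
};

struct sink_state {
	int active;
	int driving_pid;        /* pid of the session driving the sink     */
};

static void event_create(struct etm_event *ev, int owner_pid)
{
	ev->priv.owner_pid = owner_pid;   /* ->owner read exactly once     */
}

static int event_start(struct sink_state *sink, struct etm_event *ev)
{
	if (sink->active && sink->driving_pid != ev->priv.owner_pid)
		return -1;                /* different session: refuse      */
	sink->active = 1;
	sink->driving_pid = ev->priv.owner_pid;
	return 0;
}

int main(void)
{
	struct sink_state sink0 = { 0, -1 };
	struct etm_event e0, e1;

	event_create(&e0, 100);
	event_create(&e1, 200);
	printf("start e0: %d\n", event_start(&sink0, &e0));  /*  0 */
	printf("start e1: %d\n", event_start(&sink0, &e1));  /* -1 */
	return 0;
}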



If it fails, event creation fails, the end.


On success, we have the sink pointer in our event and we never ever need
to look at ->owner ever again.

Correct, we don't need it after event creation; that is the "one part of
the patch" explained above.


I'm also not seeing why exactly we need ->owner in the first place.

Suppose we make the sink0 device return -EBUSY on open() when it is
active. Then a perf session can open the sink0 device, create perf
events and attach them to the sink0 device using
perf_event_attr::config2. The events will attach to sink0 and increment
its usage count, such that any further open() will fail.
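
(For reference, a minimal userspace C model of the exclusive-open scheme
described above; the names are hypothetical and, as noted in the reply
below, the real sink device has no fops at all.)

/* Userspace model only; not an actual device file implementation. */
#include <stdio.h>
#include <errno.h>

struct sink_dev {
	int users;              /* events currently attached               */
};

static int sink_open(struct sink_dev *s)
{
	if (s->users)
		return -EBUSY;  /* someone already owns the sink           */
	return 0;
}

static void sink_attach_event(struct sink_dev *s) { s->users++; }
static void sink_detach_event(struct sink_dev *s) { s->users--; }

int main(void)
{
	struct sink_dev sink0 = { 0 };

	if (sink_open(&sink0) == 0)           /* session A opens sink0     */
		sink_attach_event(&sink0);    /* events pin the sink       */
	printf("second open: %d\n", sink_open(&sink0));  /* -EBUSY         */
	sink_detach_event(&sink0);
	printf("after detach: %d\n", sink_open(&sink0)); /* 0              */
	return 0;
}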

That's where we are diverging. The sink device doesn't have any fops. It
is all managed by the coresight driver, transparently to the perf tool.
All the perf tool does is specify which sink to use (btw, we now have
automatic sink selection support which gets rid of this and uses the best
possible sink, e.g., in the case of per-CPU sinks).


Once the events are created, the perf tool close()s the sink0 device,
which will now be in use by the events. No other events can be attached
to it.

Or are you doing the event->sink mapping every time you do: pmu::add()?
That sounds insane.

The sink is already mapped at event creation. But yes, the refcount on the
sink is managed at start/stop. That's when we need to make sure that the
event being scheduled belongs to the same owner as the one already
driving the sink.

That way, another session could use the same sink if it is free, i.e.,

perf record -e cs_etm/@sink0/u --per-thread app1

and

perf record -e cs_etm/@sink0/u --per-thread app2

both can work as long as the sink is not used by the other session.

Suzuki



