On Fri, Oct 23, 2020 at 11:34:32AM +0100, Suzuki Poulose wrote:
> On 10/23/20 10:41 AM, Peter Zijlstra wrote:
> > On Fri, Oct 23, 2020 at 09:49:53AM +0100, Suzuki Poulose wrote:
> > > On 10/23/20 8:39 AM, Peter Zijlstra wrote:
> > > >
> > > > So then I don't understand the !->owner issue, that only happens when
> > > > the task dies, which cannot be concurrent with event creation. Are you
> > >
> > > Part of the patch from Sai fixes this by avoiding the dereference
> > > after event creation (by caching it). But the kernel events need
> > > fixing.
> >
> > I'm fundamentally failing here. Creating a link to the sink is strictly
> > event-creation time. Why would you ever need it again later? Later you
> > already have the sink set up.
> >
>
> Sorry for the lack of clarity here; you are not alone, unless you
> have drowned in the CoreSight topologies ;-)
>
> Typically, the current generation of systems has the following topology:
>
>  CPU0
>  etm0 \
>        \ ________
>        /         \
>  CPU1 /           \
>  etm1 \
>        \
>         /------- sink0
>  CPU2  /
>  etm2 \          /
>        \ ________/
>        /
>  CPU3 /
>  etm3
>
> i.e., multiple ETMs share a sink. [For the sake of simplicity, I have
> used one sink. Even though there could be multiple potential sinks (of
> different types), none of them are private to the ETMs. So, in a
> nutshell, a sink can be reached by multiple ETMs.]
>
> Now, for a session:
>
>   perf record -e cs_etm/sinkid=sink0/u workload
>
> we create an event per CPU (say eventN); these are scheduled based on
> the threads that could execute on each CPU. At this point we have
> finalized sink0 and have allocated the necessary buffer for sink0.
>
> Now, when the threads are scheduled on the CPUs, we start the
> appropriate events for the CPUs. e.g.:
>
>   CPU0 sched -> workload:0 -> etm0->event0_start -> turns on all the
>   components up to sink0, starting trace collection in the buffer.
>
> Now, if another CPU, CPU1, starts tracing event1 for the workload:1
> thread, it will eventually try to turn on sink0. Since sink0 is already
> active tracing event0, we could allow this to go through and collect
> the trace in the *same hardware buffer* (which can be demuxed from the
> single AUX record using the TraceID in the packets). Please note that
> we do double buffering, and the hardware buffer is copied only when
> sink0 is stopped (see below).
>
> But if the event scheduled on CPU1 doesn't belong to the above session,
> but to a different perf session
> (say, perf record -e cs_etm/sinkid=sink0/u benchmark),
> we can't allow this to succeed and mix the trace data into that of
> workload, so we must fail the operation.
>
> In a nutshell, since the sinks are shared, we start the sink on the
> first event and keep sharing the sink buffer with any event that
> belongs to the same session (using refcounts). The sink is only
> released for other sessions when there are no more events in the
> session tracing on any of the ETMs.
>
> I know this is fundamentally a topology issue, but that is not
> something we can fix. However, the situation is changing, and we are
> starting to see systems with per-CPU sinks.
>
> Hope this helps.

I think I'm more confused now :-/ Where do we use ->owner after event
creation?

The moment you create your eventN, you create the link to sink0. That
link either succeeds (same 'cookie') or fails. If it fails, event
creation fails, the end. On success, we have the sink pointer in our
event and we never ever need to look at ->owner ever again.

I'm also not seeing why exactly we need ->owner in the first place.

Suppose we make the sink0 device return -EBUSY on open() when it is
active. Then a perf session can open the sink0 device, create perf
events and attach them to the sink0 device using
perf_event_attr::config2. The events will attach to sink0 and increment
its usage count, such that any further open() will fail.
Once the events are created, the perf tool close()s the sink0 device,
which is now in use by the events. No other events can be attached to
it.

Or are you doing the event->sink mapping every time you do pmu::add()?
That sounds insane.