On Tue, 2017-12-12 at 23:32 +0100, Peter Zijlstra wrote:
> On Tue, Dec 12, 2017 at 01:10:57PM -0800, Megha Dey wrote:
> > On Mon, 2017-11-20 at 12:57 +0100, Peter Zijlstra wrote:
> > > On Fri, Nov 17, 2017 at 05:54:05PM -0800, Megha Dey wrote:
> > > > +	mutex_lock(&bm_counter_mutex);
> > > > +	for (i = 0; i < BM_MAX_COUNTERS; i++) {
> > > > +		if (bm_counter_owner[i] == NULL) {
> > > > +			counter_to_use = i;
> > > > +			bm_counter_owner[i] = event;
> > > > +			break;
> > > > +		}
> > > > +	}
> > > > +	mutex_unlock(&bm_counter_mutex);
> > > > +
> > > > +	if (counter_to_use == -1)
> > > > +		return -EBUSY;
> > > >
> > > > +static struct pmu intel_bm_pmu = {
> > > > +	.task_ctx_nr		= perf_sw_context,
> > > > +	.attr_groups		= intel_bm_attr_groups,
> > > > +	.event_init		= intel_bm_event_init,
> > > > +	.add			= intel_bm_event_add,
> > > > +	.del			= intel_bm_event_del,
> > > > +};
> > >
> > > Still horrid.. still no.
> >
> > It seems like perf_invalid_context does not support per-task monitoring:
> >
> > find_get_context():
> >
> >	ctxn = pmu->task_ctx_nr;
> >	if (ctxn < 0)
> >		goto errout;
> >
> > Also, perf_hw_context is to be used only for the core PMU, correct?
> >
> > That leaves us with only perf_sw_context to be used. I am not sure
> > whether a new context needs to be implemented.
>
> There's work on the way to allow multiple HW PMUs. You'll either have to
> wait for that or help in making that happen. What you do not do is
> silently hack around it.

Could I get a pointer to the code implementing this?

I assume that this patch cannot be accepted until there is a way to
allow multiple HW PMUs, even if appropriate comments are added?

--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html