>-----Original Message-----
>From: Ingo Molnar [mailto:mingo@xxxxxxx]
>Sent: Wednesday, February 25, 2009 9:21 AM
>
>bts_hotcpu_handler() is called with irqs disabled, so using mutex_lock()
>is a no-no.
>
>All the BTS codepaths here are atomic (they do not schedule), so using
>a spinlock is the right solution.

I introduced the lock to protect against a race between
bts_trace_start/stop() and bts_hotcpu_handler(). If the hw-branch-tracer
is removed while a cpu is coming online at the same time, we might be
left with a disabled tracer that still traces the new cpu.

I wonder whether a simple get/put_online_cpus() would suffice, i.e.:

static void bts_trace_start(struct trace_array *tr)
{
	get_online_cpus();
	on_each_cpu(bts_trace_start_cpu, NULL, 1);
	trace_hw_branches_enabled = 1;
	put_online_cpus();
}

> static void trace_bts_prepare(struct trace_iterator *iter)
> {
>-	mutex_lock(&bts_tracer_mutex);
>+	spin_lock(&bts_tracer_lock);
>
> 	on_each_cpu(trace_bts_cpu, iter->tr, 1);
>
>-	mutex_unlock(&bts_tracer_mutex);
>+	spin_unlock(&bts_tracer_lock);
> }

Whereas start/stop are relatively fast, the above operation is rather
expensive. Would it make sense to use schedule_on_each_cpu() instead of
on_each_cpu()?

regards,
markus.
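For completeness, the matching stop path would presumably need the same
hotplug protection. A minimal sketch under the same assumptions -- a
per-cpu disable helper named bts_trace_stop_cpu() is hypothetical here,
mirroring bts_trace_start_cpu() above; this is kernel-context code, not
compiled standalone:

	/*
	 * Sketch only: guard the stop path with get/put_online_cpus()
	 * so a cpu cannot come online between disabling the tracer and
	 * stopping tracing on each cpu. Clearing the enabled flag
	 * before the cross-call keeps bts_hotcpu_handler() from
	 * re-enabling tracing on a cpu we have already stopped.
	 */
	static void bts_trace_stop(struct trace_array *tr)
	{
		get_online_cpus();
		trace_hw_branches_enabled = 0;
		on_each_cpu(bts_trace_stop_cpu, NULL, 1);
		put_online_cpus();
	}

With both paths bracketed by get/put_online_cpus(), the hotcpu handler
and start/stop can no longer interleave across a hotplug transition,
which is the race the lock was originally added for.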