On 04/25/2018 04:46 AM, kefu chai wrote:
> On Wed, Apr 25, 2018 at 12:59 AM, Adam C. Emerson <aemerson@xxxxxxxxxx> wrote:
>> On 25/04/2018, kefu chai wrote:
>> [snip]
>> I have thought off and on and chatted with a few others about
>> using a binary log, since the runtime cost of all that stringification
>> at high log levels is Not Insignificant.
>
> yeah, i recall the discussion in a performance meeting the other day.
> so are we going to have a dictionary for each log entry? and for each
> log entry, it will contain <index, variable length blob>. we will ship a tool
> which embeds a dictionary, in which we can look up a log entry by its
> index to find out <a fmt string, a list of indices into another dictionary>.
> that other dictionary contains the recipes for printing various objects
> in Ceph.

I'm in favor of this approach. I had done some (quick) analysis, and from what I saw, the string copying was the culprit in the logging mechanism, rather than the locking. I replaced the dout() mechanism with LTTng tracepoints instead of the in-Ceph logging (bypassing the locking and the linked list of log entries) and there wasn't much improvement. But again, this was on a tiny cluster, so more testing would need to be done to really draw that conclusion.

I had started using libbabeltrace to write that binary format, which can then be read with babeltrace(1), but from what I've seen, libbabeltrace isn't really fit for fast logging at run time. We can rewrite this part, though, as Kefu is suggesting.

The good thing about using our own format is that we wouldn't need a single plain-text .log file with all log entries. It would allow us to write per-CPU or per-thread binary log files and merge them in the binary tool (as many tracers do). It's not a trivial task, though. If the gain turns out to be too small, we might as well keep the current dout() infrastructure and improve parts of it.
Mohamad