On Mon, Oct 27, 2008 at 7:32 PM, Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> On Mon, 27 Oct 2008, Mike Snitzer wrote:
>
>> Please see: e6022603b9aa7d61d20b392e69edcdbbc1789969
>>
>> Having a look at the LKML archives, this was raised back in 2006:
>> http://lkml.org/lkml/2006/6/23/337
>>
>> I'm not interested in whether unlikely() actually helps here.
>>
>> I'm still missing _why_ rsv is mostly NULL at this callsite, as Andrew
>> asserted here:
>> http://lkml.org/lkml/2006/6/23/400
>>
>> And then Steve here: http://lkml.org/lkml/2006/6/24/76
>> Where he said:
>> "The problem is that in these cases the pointer is NULL several thousands
>> of times for every time it is not NULL (if ever). The non-NULL case is
>> where an error occurred or something very special. So I don't see how
>> the if here is a problem?"
>>
>> I'm missing which error or what "something very special" is the
>> unlikely() reason for having rsv be NULL.
>>
>> Looking at the code, ext3_clear_inode() is _the_ place where the
>> i_block_alloc_info is cleaned up. In my testing the rsv is _never_
>> NULL if the file was open for writing. Are we saying that reads are
>> much more common than writes? That may be a reasonable assumption, but
>> saying as much is very different from what Steve seemed to be alluding
>> to...
>>
>> Anyway, I'd appreciate some clarification here.
>
> Attached is a patch that I used for counting.
>
> Here are my results:
> # cat /debug/tracing/ftrace_null
> 45
> # cat /debug/tracing/ftrace_nonnull
> 7
>
> Ah, seems that there are cases where it is non-NULL more often. Anyway, it
> obviously is not a fast path (total of 52). Even if it was all NULL, it is
> not big enough to call for the confusion.

What was your workload that resulted in this breakdown? AFAIK you'd
have 100% in ftrace_nonnull if you simply opened new files and wrote
to them.

> I'd suggest removing the if conditional, and just calling kfree.

Yes, probably.
thanks,
Mike