> On Wed, 2006-09-06 at 18:27 +0200, Jan Kara wrote:
> > > But my debug clearly shows that we are clearing the buffer, while
> > > we haven't actually submitted it to ll_rw_block(). (I added a
> > > "track" flag to the bh and set it in journal_commit_transaction()
> > > when we add buffers to wbuf[], and clear it in ll_rw_block() after
> > > submit. I check for this flag in journal_unmap_buffer() while
> > > clearing the buffer.) Here is what my debug shows:
> > >
> > > buffer is tracked bh ffff8101686ea850 size 1024
> > >
> > > Call Trace:
> > > [<ffffffff8020b395>] show_trace+0xb5/0x370
> > > [<ffffffff8020b665>] dump_stack+0x15/0x20
> > > [<ffffffff8030d474>] journal_invalidatepage+0x314/0x3b0
> >   I see just journal_invalidatepage() here. That is fine. It calls
> > journal_unmap_buffer(), which should do nothing and return 0. If it
> > does not, that would IMO be a bug. If the buffer is really unmapped
> > here, what state is it in (i.e., which list is it on)?
>
> Actually, I added dump_stack() in journal_unmap_buffer() where it
> does clear_buffer_mapped(). gcc must have inlined the function,
> which is why only journal_invalidatepage() shows up in the trace.
> I will add more debug to track which list the bh came from.
  Ah, OK. My guess is that the buffer is on the BJ_Locked list. But we
don't write buffers from the BJ_Locked list (as they are supposedly
already written). What probably happens is:

	Process 1				Process 2
	---------				---------
	Starts scanning t_sync_datalist.
	Adds the buffer to wbuf[].
						Locks the buffer (it can be
						some other writer).
	Restarts scanning the list.
	Finds the buffer locked -> moves it
	to the BJ_Locked list (this is
	actually a BUG, as we have not
	written the buffer yet).
						journal_unmap_buffer() happens.

  Maybe what you could verify is that we are moving the buffer already
added to wbuf[] to the BJ_Locked list in the write_out_data: loop. If
what I've described above really happens, the fix should be different:
we should not have moved the buffer to the BJ_Locked list...

								Honza
--
Jan Kara <jack@xxxxxxx>
SuSE CR Labs
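
For readers following the thread, below is a rough sketch of the
write_out_data: loop in journal_commit_transaction() that the scenario
above refers to. It is reconstructed from memory of the 2.6-era
fs/jbd/commit.c, not copied from it: the helper and field names
(journal_do_submit_data(), journal->j_wbufsize, jh->b_tnext) should
match that code, but locking, refcounting and error handling are
omitted, so treat it only as an illustration of where the suspected
refile to BJ_Locked would happen.

/*
 * Simplified sketch of the data write-out loop in
 * journal_commit_transaction().  journal_do_submit_data() is the
 * helper that eventually calls ll_rw_block() on the wbuf[] array.
 */
write_out_data:
	spin_lock(&journal->j_list_lock);

	while (commit_transaction->t_sync_datalist) {
		jh = commit_transaction->t_sync_datalist;
		/* advance the scan to the next buffer on the list */
		commit_transaction->t_sync_datalist = jh->b_tnext;
		bh = jh2bh(jh);

		if (buffer_locked(bh)) {
			/*
			 * The buffer is locked, so assume it is already
			 * under I/O and park it on BJ_Locked.
			 *
			 * Suspected race: this bh may already sit in wbuf[]
			 * from an earlier pass (added while it was still
			 * unlocked), and wbuf[] has not been submitted yet.
			 * Once refiled to BJ_Locked the commit code never
			 * writes it, and journal_unmap_buffer() can later
			 * clear it.
			 */
			__journal_file_buffer(jh, commit_transaction,
					      BJ_Locked);
		} else if (buffer_dirty(bh)) {
			/* Queue the buffer; the actual write happens later. */
			wbuf[bufs++] = bh;
			if (bufs == journal->j_wbufsize) {
				spin_unlock(&journal->j_list_lock);
				journal_do_submit_data(wbuf, bufs);
				bufs = 0;
				goto write_out_data;
			}
		}
		/* ... clean buffers are unfiled here ... */
	}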