On Thu, Jul 08, 2010 at 09:43:22PM +0300, Artem Bityutskiy wrote:
> Hmm, was thinking about this while driving home - the forker approach
> has a good resilience property - if it cannot fork - it'll do the stuff
> itself. I have a feeling that if something like this to be implemented
> with the approach I suggested, we'll end up with similar level of
> complexity that we wanted to get rid of...

Yes, the lazy starting is what adds the complexity.  I think starting it
once we have any filesystem mounted on the bdi and stopping it once all
filesystems are gone is a lot simpler and more elegant.

It also solves the other issue with all lazy schemes, that is the race
between dirtying data with no alive thread and the bdi going away.  The
current code tries to deal with that by splicing the remaining dirty
inodes to the default BDI, but that'll just cause memory corruption in
most cases, because the BDI is referenced all over the writeback code
and will already be gone by the point those references are actually
used.  Which makes me believe this race is a rather theoretical one.
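
To make the non-lazy scheme concrete, here is a minimal userspace sketch
(pthreads instead of kthreads, and made-up names like bdi_get_mount /
bdi_put_mount / flusher_fn; this is not the actual kernel code, which
lives in mm/backing-dev.c and fs/fs-writeback.c): the per-bdi flusher is
started when the first filesystem is mounted on the bdi and stopped when
the last one is unmounted, so the thread's lifetime brackets any dirtying.

/*
 * Sketch: per-bdi flusher started on first mount, stopped on last
 * unmount.  All names are illustrative only.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct bdi {
	pthread_mutex_t lock;
	pthread_cond_t  wake;		/* kicks the flusher */
	pthread_t	flusher;
	int		nr_mounts;	/* filesystems mounted on this bdi */
	bool		should_stop;
};

/* Stand-in for the periodic/background writeback work. */
static void *flusher_fn(void *arg)
{
	struct bdi *bdi = arg;

	pthread_mutex_lock(&bdi->lock);
	while (!bdi->should_stop) {
		/* In the kernel this would write back dirty inodes. */
		printf("flusher: writing back dirty data\n");
		pthread_cond_wait(&bdi->wake, &bdi->lock);
	}
	pthread_mutex_unlock(&bdi->lock);
	return NULL;
}

/* Called when a filesystem is mounted on the bdi. */
static void bdi_get_mount(struct bdi *bdi)
{
	pthread_mutex_lock(&bdi->lock);
	if (bdi->nr_mounts++ == 0) {
		bdi->should_stop = false;
		pthread_create(&bdi->flusher, NULL, flusher_fn, bdi);
	}
	pthread_mutex_unlock(&bdi->lock);
}

/* Called when a filesystem is unmounted from the bdi. */
static void bdi_put_mount(struct bdi *bdi)
{
	bool stop = false;

	pthread_mutex_lock(&bdi->lock);
	if (--bdi->nr_mounts == 0) {
		bdi->should_stop = true;
		pthread_cond_signal(&bdi->wake);
		stop = true;
	}
	pthread_mutex_unlock(&bdi->lock);

	if (stop)
		pthread_join(bdi->flusher, NULL);
}

int main(void)
{
	struct bdi bdi = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.wake = PTHREAD_COND_INITIALIZER,
	};

	bdi_get_mount(&bdi);	/* first mount: flusher starts */
	bdi_get_mount(&bdi);	/* second mount: nothing to do */
	sleep(1);
	bdi_put_mount(&bdi);
	bdi_put_mount(&bdi);	/* last unmount: flusher stops */
	return 0;
}

The point of the sketch is only the lifetime rule: no dirtying can happen
without an alive flusher, and the thread is joined before the bdi can go
away, which is what removes the race the lazy schemes have to handle.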