On Thu, Nov 29, 2012 at 6:12 AM, Chris Mason <chris.mason@xxxxxxxxxxxx> wrote:
>
> Jumping in based on Linus' original patch, which is doing something
> like this:
>
> set_blocksize() {
>         block new calls to writepage, prepare/commit_write
>         set the block size
>         unblock
>
>         < --- can race in here and find bad buffers --->
>
>         sync_blockdev()
>         kill_bdev()
>
>         < --- now we're safe --- >
> }
>
> We could add a second semaphore and a page_mkwrite call:

Yeah, we could be fancy, but the more I think about it, the less I can
say I care.

After all, the only things that do the whole set_blocksize() thing
should be:

 - filesystems at mount-time

 - things like loop/md at block device init time.

and quite frankly, if there are any *concurrent* writes with either of
the above, I really *really* don't think we should care. I mean,
seriously.

So the _only_ real reason for the locking in the first place is to make
sure of internal kernel consistency. We do not want to oops or corrupt
memory if people do odd things. But we really *really* don't care if
somebody writes to a partition at the same time as somebody else mounts
it. Not enough to do extra work to please insane people.

It's also worth noting that NONE OF THIS HAS EVER WORKED IN THE PAST.
The whole sequence always used to be unlocked. The locking is entirely
new. There are certainly no legacy users that can possibly rely on "I
did writes at the same time as the mount with no serialization, and it
worked". It never has worked.

So I think this is a case of "perfect is the enemy of good". Especially
since I think that with the fs/buffer.c approach, we don't actually
need any locking at all at higher levels.

                Linus
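
For reference, the ordering in Chris's pseudocode maps roughly onto the C
sketch below. It is illustrative only, not the actual fs/block_dev.c code:
block_bdev_writers()/unblock_bdev_writers() are hypothetical stand-ins for
whatever excludes writepage and prepare/commit_write during the size change,
and kill_bdev() is private to fs/block_dev.c, so this would not build out of
tree.

    #include <linux/blkdev.h>
    #include <linux/fs.h>

    /* Hypothetical placeholders for the writer exclusion discussed above. */
    static void block_bdev_writers(struct block_device *bdev)   { }
    static void unblock_bdev_writers(struct block_device *bdev) { }

    static void set_blocksize_sketch(struct block_device *bdev, int size)
    {
            block_bdev_writers(bdev);
            bdev->bd_block_size = size;                     /* set the block size */
            bdev->bd_inode->i_blkbits = blksize_bits(size);
            unblock_bdev_writers(bdev);

            /*
             * Race window: a write that lands here can still find or
             * create buffers sized for the old block size.
             */

            sync_blockdev(bdev);    /* flush dirty buffers */
            kill_bdev(bdev);        /* invalidate the stale buffers */

            /* Only past this point are the old-size buffers gone. */
    }

Chris's suggestion (a second semaphore plus a page_mkwrite hook) would
presumably extend that exclusion across sync_blockdev()/kill_bdev(); Linus's
point is that the only writes that can hit the window are ones racing with
mount or loop/md setup, which has never been a supported combination anyway.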