On Wed, May 12, 2010 at 09:39:27PM -0400, Josef Bacik wrote:
> I just got a suggestion from hpa about instead just moving what BTRFS does
> into generic_perform_write.  What btrfs does is allocate a chunk of pages
> to cover the entirety of the write, set everything up, do the copy from
> user into the pages, and tear everything down; essentially what
> generic_perform_write does, just with more pages.  I could modify
> generic_perform_write and the write_begin/write_end aops to do this, where
> write_begin would return how many pages it allocated, we'd copy all of the
> user data into the pages we allocated at once, and then call write_end
> with the pages we allocated in write_begin.  Then I could just make btrfs
> use write_begin/write_end.  So which option seems more palatable?  Thanks,

Having a more efficient generic write path would be great.  I'm not quite
sure btrfs gets everything that is needed right already, but getting
started on this would be great.

And to get back to the original question: adding a ->perform_write is a
really dumb idea.  It doesn't fit into the structure of the real aops at
all, as it requires context-dependent locking and so on.  It would just be
a nasty hack around the fact that we still leave too much work to the
filesystem in the write path, and around the bad prototype of the
->aio_read/->aio_write methods.

For a start, generic_segment_checks should move out of the methods and into
fs/read_write.c, before calling into the methods.  To do this fully we'll
need to pass down a count of total bytes into ->aio_read and ->aio_write,
though.  Adding a new generic_write_prep helper for all the boilerplate
code in ->aio_write also seems fine to me.