> -----Original Message-----
> From: linux-kernel-owner@xxxxxxxxxxxxxxx [mailto:linux-kernel-owner@xxxxxxxxxxxxxxx] On Behalf Of Linus Torvalds
> Sent: June 01, 2010 11:22 AM
>
> On Tue, 1 Jun 2010, Jens Axboe wrote:
> > >
> > > Also, the minimum size of the buffer is 2 pages. Why is it not 1?
> > > (Notwithstanding Linus's assertion, a buffer size of 1 page did
> > > give us POSIX compliance in kernels before 2.6.10.)
> >
> > I'll defer to Linus on that, I remember some emails on that part from
> > way back when. As far as I can tell, POSIX wants atomic writes of
> > "less than a page size", which would make more sense as "of a page
> > size and less". And since it should not be a page size from either
> > side on a uni-directional pipe, then 1 page seems enough for that
> > guarantee at least.
>
> Hmm. You guys may well be right that a single slot is sufficient. It
> still gives us PIPE_BUF worth of data for writing atomically. I had
> this memory that we needed two because of the merging logic (we have
> that special case for re-using the previous page, so that we don't
> waste memory for lots of small writes), but looking at the code there
> is no reason at all for me to have thought so.
>
> So I don't know why I thought we needed the extra slot, and a single
> slot (if anybody really wants slow writes) looks to be fine.

Ok, I have a really dumb/basic question. The reason we are letting
users grow pipe->buffers is to decrease the number of splice calls.
This implies the user has fcntl'd the pipe (when he/she wants
performance). Can we not have an option where we don't have to allocate
pipe->buffers worth of pages every single time? As an example, look at
'default_file_splice_read'. Is it possible to enhance the existing
functionality by defining a new cmd and a flag (in struct pipe_xxx etc.)
and allowing a user to control that? Something like
'fcntl->F_SETPIPE_SZ_AND_LOCK_PIPE_PAGES'?

Does this make sense?
regards
Chetan Loke
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html