On Fri, 1 Apr 2011, Mason wrote:

> > > Was this ever accepted into the mainline?
> > > (It seems to have lived within -mm for a while)
> >
> > Nope, it never was (as you've by now figured out).

I was not able to get the base patchset merged that allowed the page
cache to work with different page sizes. The reason given was that the
functionality it would have been used for would never have a chance of
being accepted.

> > A lot of the rationale for larger block sizes was obviated by the use
> > of more advanced file systems, such as ext4, which have other methods
> > of dealing with the inefficiencies caused by smaller block sizes. If
> > your main complaint with using a 4k block size on the set-top box was
> > the mount-time slowness, that can be fixed with the nocheck mount
> > option.

The rationale for the large block size patchset was to avoid having to
handle small 4k chunks in both the hardware and the OS paths. That
rationale still stands. The fixes to file systems address metadata
issues at a higher level.

What we can do at the page level (and what we have done) is basically
improve locking, but at some point that will no longer be enough, since
I/O sizes keep increasing. At some point we need to be able to handle
larger physical chunks.

Andrea's work on getting THP committed to the kernel lays some more
groundwork for future possibilities. We could, for example, have a base
2M page size for the page cache at some point (some form of my patches
to allow the page cache to function with different orders would be
required, though).

With larger sizes also come fragmentation issues. We have continually
added more means to handle defragmentation and do compaction. At some
point we should be able to handle larger contiguous blocks and be able
to implement larger block sizes.