On Mon, Jul 10, 2023 at 04:58:55PM -0700, Luis Chamberlain wrote:
> Curious how this ended up being the heuristic used: shoot for the
> MAX_PAGECACHE_ORDER sky first, and then go down by 1/2 each time. I
> don't see it explained in the commit log, but I'm sure there has to be
> some reasonable rationale. From the cover letter, I could guess that
> the gains of always using the largest folio possible imply latency
> savings through other means, so the small latency spent looking is
> nowhere near the savings from using it. But I'd rather understand the
> rationale a bit more.
>
> Are there situations where it might make sense to limit this initial
> max preferred high-order folio to something smaller than
> MAX_PAGECACHE_ORDER? If not, how do we know?

It's the exact same situation as when filesystems all switched from
blocks to extents 10-20 years ago. We _always_ want our allocations to
be as big as possible - it reduces metadata overhead, preserves
locality, reduces fragmentation - IOW, this is clearly the right thing
to do.
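
For reference, here's a minimal userspace sketch of the "start at the
largest order, halve on failure" heuristic being discussed. The value
of MAX_PAGECACHE_ORDER and the try_alloc_order() allocator below are
stand-ins for illustration only, not the symbols used in the series:

/*
 * Sketch of the fallback heuristic: ask for the biggest allocation
 * we're allowed, and halve (order--) until something succeeds.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE		4096UL
#define MAX_PAGECACHE_ORDER	8	/* illustrative value */

/* Pretend allocator: refuses very large orders to force fallback. */
static void *try_alloc_order(unsigned int order)
{
	if (order > 4)		/* simulate fragmentation/failure */
		return NULL;
	return malloc(PAGE_SIZE << order);
}

int main(void)
{
	unsigned int order = MAX_PAGECACHE_ORDER;
	void *buf;

	/* Each order-- halves the allocation size (size = PAGE_SIZE << order). */
	while (!(buf = try_alloc_order(order)) && order > 0)
		order--;

	if (buf) {
		printf("allocated order-%u (%lu bytes)\n",
		       order, PAGE_SIZE << order);
		free(buf);
	}
	return 0;
}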