On Wed, Mar 05, 2025 at 07:05:23AM -0700, Christoph Hellwig wrote:
> Since commit 59bb47985c1d ("mm, sl[aou]b: guarantee natural alignment
> for kmalloc(power-of-two)"), kmalloc and friends guarantee that power
> of two sized allocations are naturally aligned. Limit our use of
> kmalloc for buffers to these power of two sizes and remove the
> fallback to the page allocator for this case, but keep a check in
> addition to trusting the slab allocator to get the alignment right.
>
> Also refactor the kmalloc path to reuse various calculations for the
> size and gfp flags.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>

.....

> @@ -300,18 +300,22 @@ xfs_buf_alloc_backing_mem(
>  	if (xfs_buftarg_is_mem(bp->b_target))
>  		return xmbuf_map_page(bp);
>  
> -	/*
> -	 * For buffers that fit entirely within a single page, first attempt to
> -	 * allocate the memory from the heap to minimise memory usage. If we
> -	 * can't get heap memory for these small buffers, we fall back to using
> -	 * the page allocator.
> -	 */
> -	if (size < PAGE_SIZE && xfs_buf_alloc_kmem(new_bp, flags) == 0)
> -		return 0;
> +	/* Assure zeroed buffer for non-read cases. */
> +	if (!(flags & XBF_READ))
> +		gfp_mask |= __GFP_ZERO;

We should probably drop this zeroing altogether. The higher level code
cannot assume that a buffer obtained for write through the
xfs_trans_get_buf() path contains zeros. e.g. if the buffer was in
cache when the get_buf() call occurs, it will contain whatever was in
the buffer, not zeros. This occurs even if the buffer was STALE in
cache at the time of the get() operation.

Hence callers must always initialise the entire buffer themselves (and
they do!), so allocating zeroed buffers when we are going to zero them
ourselves anyway is really unnecessary overhead...

This may not matter for 4kB block size filesystems, but it may make a
difference for 64kB block size filesystems, especially when we are only
doing a get() on the buffer to mark it stale in a transaction and never
actually use the contents of it...

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
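
[For concreteness, the change Dave is suggesting amounts to removing
the conditional zeroing added by the quoted hunk. A minimal, untested
sketch of that delta against the patch, assuming nothing else in the
function depends on gfp_mask having __GFP_ZERO set:

	-	/* Assure zeroed buffer for non-read cases. */
	-	if (!(flags & XBF_READ))
	-		gfp_mask |= __GFP_ZERO;

Callers of xfs_trans_get_buf() would then keep relying on their
existing full-buffer initialisation, exactly as they already must for
the cache-hit case Dave describes above.]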