On Sun, May 2, 2021 at 11:00 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> I think we have checks that the hw blocksize is a power-of-two (maybe
> just in SCSI? see sd_read_capacity())

Not the hardware block size: our own fs/buffer.c block size.

I could imagine some fs corruption that causes a filesystem to ask for
something like a 1536-byte block size, and I don't see __bread() for
example checking that 'size' is actually a power of 2.

And if it isn't a power of two, then I see __find_get_block() and
__getblk_slow() doing insane things and possibly even overflowing the
allocated page.

Some filesystems actually start from the blocksize on disk (xfs looks
to do that), and do things like

        sb->s_blocksize = mp->m_sb.sb_blocksize;
        sb->s_blocksize_bits = ffs(sb->s_blocksize) - 1;

and just imagine what happens if the blocksize on disk is 1536...

Now, xfs has a check in the SB validation routine:

        sbp->sb_blocksize != (1 << sbp->sb_blocklog)

and if that fails, it will return -EFSCORRUPTED. But what about other
random filesystems?

Hopefully everybody checks it. But my point is that passing in "size"
instead of "bits" not only caused this ffs() optimization, it's also a
potential source of subtle problems..

(But it goes back to the dark ages, I'm not blaming anybody but myself).

               Linus
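
[Editor's illustration of the 1536-byte case above; this is a minimal
stand-alone userspace sketch, not code from the thread. ffs() returns
the 1-based position of the lowest set bit, so for a blocksize that is
not a power of two the derived s_blocksize_bits no longer round-trips
back to the blocksize, which is exactly what the xfs-style
(1 << blocklog) check catches.]

        #include <stdio.h>
        #include <strings.h>    /* ffs() */

        int main(void)
        {
                unsigned int blocksize = 1536;          /* 0b11000000000: not a power of two */
                unsigned int bits = ffs(blocksize) - 1; /* lowest set bit is bit 9 */

                /* 1 << 9 == 512, which is not 1536: the sanity check fails */
                printf("blocksize=%u bits=%u 1<<bits=%u power_of_2=%d\n",
                       blocksize, bits, 1u << bits,
                       (blocksize & (blocksize - 1)) == 0);
                return 0;
        }

[In the kernel, the equivalent guard would presumably be something like
is_power_of_2(size) on the caller-supplied block size before fs/buffer.c
trusts it.]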