> While running generic/103, I observed what looks like memory corruption
> and (with slub debugging turned on) a slub redzone warning on i386 when
> inactivating an inode with a 64k remote attr value.
>
> On a v5 filesystem, maximally sized remote attr values require one block
> more than 64k worth of space to hold both the remote attribute value and
> the remote attribute value header (64 bytes). On a 4k block filesystem
> this results in a 68k buffer; on a 64k block filesystem, this would be a
> 128k buffer. Note that even though we'll never use more than 65,600
> bytes of this buffer, XFS_MAX_BLOCKSIZE is 64k.
>
> This is a problem because the definition of struct xfs_buf_log_format
> allows for XFS_MAX_BLOCKSIZE worth of dirty bitmap (64k). On i386, when
> we invalidate a remote attribute, xfs_trans_binval zeroes all 68k worth
> of the dirty map, writing right off the end of the log item and
> corrupting memory. We've gotten away with this on x86_64 for years
> because the compiler inserts a u32 of padding at the end of struct
> xfs_buf_log_format.
>
> Fortunately for us, remote attribute values are written to disk with
> xfs_bwrite(), which is to say that they are not logged. Fix the problem
> by removing all places where we could end up creating a buffer log item
> for a remote attribute value and leave a note explaining why.

I think this changelog needs an explanation of why it is fine that
xfs_attr3_leaf_freextent uses xfs_attr_rmtval_stale, which just trylocks
and only checks whether the buffers are in core. And while the in-core
part looks sane to me, I think the trylock is wrong, and we need to pass
the locking flag to xfs_attr_rmtval_stale.
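
Something along these lines is what I have in mind. This is an untested
sketch only: it reuses the helper name from the patch under review and
assumes it keeps the inode and mapping arguments it has there, adding an
incore_flags argument so each caller decides whether to trylock:

static void
xfs_attr_rmtval_stale(
	struct xfs_inode	*ip,
	struct xfs_bmbt_irec	*map,
	xfs_buf_flags_t		incore_flags)
{
	struct xfs_mount	*mp = ip->i_mount;
	struct xfs_buf		*bp;

	/*
	 * Only buffers that are already in memory need to be staled.
	 * Without XBF_TRYLOCK in incore_flags, this waits for the buffer
	 * lock instead of silently skipping a locked buffer.
	 */
	bp = xfs_buf_incore(mp->m_ddev_targp,
			XFS_FSB_TO_DADDR(mp, map->br_startblock),
			XFS_FSB_TO_BB(mp, map->br_blockcount), incore_flags);
	if (bp) {
		xfs_buf_stale(bp);
		xfs_buf_relse(bp);
	}
}

A caller that cannot afford to skip a locked buffer (the
xfs_attr3_leaf_freextent path, if my concern above is right) would then
pass 0 and block on the buffer lock, while a caller that can tolerate
racing with a locked buffer could keep passing XBF_TRYLOCK.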
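
As an aside, here is a tiny userspace model of the overrun the quoted
changelog describes. The constants and layout mirror my reading of
fs/xfs/libxfs/xfs_log_format.h, so treat them as an approximation of the
real struct rather than a copy of it:

#include <stdio.h>
#include <stdint.h>

#define FAKE_MAX_BLOCKSIZE	(64 * 1024)	/* stands in for XFS_MAX_BLOCKSIZE */
#define FAKE_BLF_CHUNK		128		/* bytes covered per bitmap bit */
#define FAKE_NBWORD		(8 * sizeof(unsigned int))

/* bitmap words needed to cover a buffer of 'len' bytes */
static unsigned int map_words(unsigned int len)
{
	return (len / FAKE_BLF_CHUNK + FAKE_NBWORD - 1) / FAKE_NBWORD;
}

/* stand-in for struct xfs_buf_log_format; bitmap sized for 64k only */
struct fake_buf_log_format {
	unsigned short	blf_type;
	unsigned short	blf_size;
	unsigned short	blf_flags;
	unsigned short	blf_len;
	int64_t		blf_blkno;
	unsigned int	blf_map_size;
	unsigned int	blf_data_map[(FAKE_MAX_BLOCKSIZE / FAKE_BLF_CHUNK) / FAKE_NBWORD];
};

int main(void)
{
	struct fake_buf_log_format blf = { 0 };
	unsigned int avail = sizeof(blf.blf_data_map) / sizeof(blf.blf_data_map[0]);

	printf("bitmap words available:      %u\n", avail);                /* 16 */
	printf("words needed for 64k buffer: %u\n", map_words(64 * 1024)); /* 16 */
	printf("words needed for 68k buffer: %u\n", map_words(68 * 1024)); /* 17 */
	/*
	 * Zeroing 17 words writes one u32 past blf_data_map[].  On x86_64
	 * the int64_t member gives the struct 8-byte alignment, so sizeof()
	 * is 88 and the stray word lands in tail padding; on i386 the
	 * alignment is 4, sizeof() is 84, and the write runs off the end.
	 */
	printf("sizeof(struct):              %zu\n", sizeof(blf));
	return 0;
}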