On Wed, Jun 08, 2022 at 10:57:22AM +0100, Luís Henriques wrote:
> On Wed, Jun 08, 2022 at 10:23:15AM +1000, Dave Chinner wrote:
> > On Tue, Jun 07, 2022 at 04:15:13PM +0100, Luís Henriques wrote:
> > > CephFS doesn't have a maximum xattr size. Instead, it imposes a
> > > maximum size for the full set of an inode's xattrs names+values,
> > > which by default is 64K but can be changed by a cluster admin.
> > >
> > > Test generic/486 started to fail after fixing a ceph bug where this
> > > limit wasn't being imposed. Dynamically adjust the size of the
> > > xattr being set if the error returned is -ENOSPC.
> >
> > Ah, this shouldn't be getting anywhere near the 64kB limit unless
> > ceph is telling userspace its block size is > 64kB:
> >
> > 	size = sbuf.st_blksize * 3 / 4;
> > 	.....
> > 	size = MIN(size, XATTR_SIZE_MAX);
>
> Yep, that's exactly what is happening. The cephfs kernel client
> reports the value being used for the ceph "object size" here, which
> defaults to 4M. Hence, we'll set size to XATTR_SIZE_MAX.

Yikes. This is known to break random applications that size buffers
based on a multiple of sbuf.st_blksize and assume that it is going to
be roughly 4kB. e.g. size a buffer at 1024 * sbuf.st_blksize,
expecting to get a ~4MB buffer, and instead it tries to allocate a
4GB buffer.... (a minimal sketch of this failure mode is below)

> > Regardless, the correct thing to do here is pass the max supported
> > xattr size from the command line (because fstests knows what that
> > is for each filesystem type) rather than hard coding
> > XATTR_SIZE_MAX in the test.
>
> OK, makes sense. But then, for the ceph case, it becomes messy
> because we also need to know the attribute name to compute the
> maximum size. I guess we'll need an extra argument for that too.

Just pass in a size for ceph that has enough spare space for the
attribute names in it, like for g/020. Don't make it more complex
than it needs to be. (A rough sketch of this is also below.)

-Dave.

-- 
Dave Chinner
david@xxxxxxxxxxxxx
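
A minimal sketch of the st_blksize failure mode described above
(hypothetical application code, not taken from the thread or any
real program):

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>

	int main(int argc, char **argv)
	{
		struct stat sbuf;
		size_t buf_len;
		char *buf;

		if (argc < 2 || stat(argv[1], &sbuf) < 0) {
			perror("stat");
			return 1;
		}

		/*
		 * Assumes st_blksize is ~4kB, so buf_len should come
		 * out at ~4MB. On cephfs, st_blksize reports the 4MB
		 * object size, so this asks for 4GB instead.
		 */
		buf_len = 1024 * (size_t)sbuf.st_blksize;
		buf = malloc(buf_len);
		if (!buf) {
			fprintf(stderr, "malloc(%zu) failed\n", buf_len);
			return 1;
		}
		printf("allocated %zu byte buffer\n", buf_len);
		free(buf);
		return 0;
	}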
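
And a rough sketch of the command-line approach, built around the
size calculation quoted earlier; the helper name and argument
position here are assumptions, not the actual generic/486 code:

	#include <stdlib.h>
	#include <sys/stat.h>
	#include <linux/limits.h>	/* XATTR_SIZE_MAX */

	#define MIN(a, b)	((a) < (b) ? (a) : (b))

	/*
	 * Hypothetical helper: cap the xattr value size with a limit
	 * passed in by fstests on the command line, falling back to
	 * the hard coded XATTR_SIZE_MAX when no limit is given.
	 */
	static size_t xattr_value_size(const struct stat *sbuf,
				       int argc, char **argv)
	{
		size_t size = sbuf->st_blksize * 3 / 4;
		size_t max = XATTR_SIZE_MAX;

		if (argc > 2)
			max = strtoul(argv[2], NULL, 0);
		return MIN(size, max);
	}

For ceph, fstests would then pass a value with enough headroom for
the attribute names, like g/020 already does.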