On Tue, Feb 28, 2023 at 02:15:20PM -0500, Taylor Blau wrote:

> On Tue, Feb 28, 2023 at 05:09:01PM +0100, Danny Smit wrote:
> > I couldn't find a lot of documentation about the size limitations of
> > the .gitattributes file, but I did find the change that seems to have
> > introduced it:
> > https://github.com/git/git/commit/27ab4784d5c9e24345b9f5b443609cbe527c51f9
> > The change describes that the file needs to be smaller than 100MB,
> > which it is.
>
> It's interesting that you can cause `fsck` to produce an error in the
> bare repository but not in the non-bare one. Do you have
> `fsck.gitattributesLarge` set to anything in the non-bare repository?
> Are the affected objects in the `fsck.skipList`?
>
> Looking at 27ab4784d5, the comment there says:
>
>         if (!buf || size > ATTR_MAX_FILE_SIZE) {
>                 /*
>                  * A missing buffer here is a sign that the caller found the
>                  * blob too gigantic to load into memory. Let's just consider
>                  * that an error.
>                  */
>                 return report(options, oid, OBJ_BLOB,
>                               FSCK_MSG_GITATTRIBUTES_LARGE,
>                               ".gitattributes too large to parse");
>         }
>
> ...so it's possible that the caller indeed found the blob too large to
> load into memory, which would cause us to emit the ".gitattributes too
> large to parse" fsck error without a .gitattributes file that actually
> exceeds 100 MiB in size.

I think that "!buf" case would also trigger if the size exceeded
core.bigFileThreshold. It might be worth checking for that, too.

-Peff
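
P.S. To make that interaction concrete, here is a rough standalone
sketch (not the actual git code paths; load_blob_if_small() and
check_gitattributes() are made-up stand-ins) of how a caller that
declines to load blobs above core.bigFileThreshold can make the "!buf"
branch fire for a .gitattributes blob that is well under 100 MiB:

    #include <stdio.h>
    #include <stdlib.h>

    #define ATTR_MAX_FILE_SIZE (100 * 1024 * 1024)  /* 100 MiB, as in fsck.c */

    /* stand-in for core.bigFileThreshold (default 512 MiB) */
    static size_t big_file_threshold = 512 * 1024 * 1024;

    /* Pretend loader: refuses to load anything over the threshold. */
    static char *load_blob_if_small(size_t size)
    {
            if (size > big_file_threshold)
                    return NULL;    /* caller would stream it instead */
            return calloc(1, size ? size : 1);
    }

    /* Mirrors the quoted check: NULL buffer or oversized blob is an error. */
    static int check_gitattributes(const char *buf, size_t size)
    {
            if (!buf || size > ATTR_MAX_FILE_SIZE) {
                    fprintf(stderr, ".gitattributes too large to parse\n");
                    return -1;
            }
            return 0;
    }

    int main(void)
    {
            /* A 60 MiB .gitattributes blob: under the 100 MiB limit... */
            size_t size = 60 * 1024 * 1024;

            /* ...but with a lowered core.bigFileThreshold it is never loaded, */
            big_file_threshold = 32 * 1024 * 1024;

            /* ...so the check fires on !buf, not on the size comparison. */
            char *buf = load_blob_if_small(size);
            int ret = check_gitattributes(buf, size);

            free(buf);
            return ret ? 1 : 0;
    }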