On Tue, Feb 28, 2017 at 08:33:13PM +0000, brian m. carlson wrote:

> On Tue, Feb 28, 2017 at 03:26:34PM -0500, Jeff King wrote:
> > Yeah, a lot of brian's patches have been focused around fixing the
> > related size assumptions. We've got GIT_SHA1_HEXSZ, which doesn't solve
> > the problem, but at least makes it easy to find. And a big improvement
> > in the most recent series is a parse_oid() function that lets you parse
> > object-ids left-to-right without knowing the size up front. So things
> > like:
> >
> >   if (len > 82 &&
> >       !get_sha1_hex(buf, sha1_a) &&
> >       get_sha1_hex(buf + 41, sha1_b))
> >
> > become more like:
> >
> >   if (parse_oid(p, oid_a, &p) && *p++ == ' ' &&
> >       parse_oid(p, oid_b, &p) && *p++ == '\n')
>
> What I could do instead of using GIT_SHA1_HEXSZ is use GIT_MAX_HEXSZ for
> things that are about allocating enough memory and create a global (or
> function) for things that only care about what the current hash size is.
> That might be a desirable approach. If other people agree, I can make a
> patch to do that.

I was going to say "don't worry about it, and just focus on converting to
constants for now". But I guess while you are doing that, it does not hurt
to split out the MAX_HEXSZ cases; it will save work in sorting them later.

-Peff
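
The idea in Peff's example translates into a self-contained sketch roughly
like the one below. It is only an illustration of the left-to-right pattern,
not git's actual code: MAX_HEXSZ, current_hexsz, and this parse_oid() body
are stand-ins for whatever the real series uses, and the result lands in a
plain byte buffer rather than a struct object_id.

  #include <stdio.h>
  #include <stddef.h>

  #define MAX_HEXSZ 64  /* big enough for the largest hash we might support */

  /* hex length of the hash currently in use; 40 for SHA-1 in this sketch */
  static size_t current_hexsz = 40;

  static int hexval(char c)
  {
          if (c >= '0' && c <= '9')
                  return c - '0';
          if (c >= 'a' && c <= 'f')
                  return c - 'a' + 10;
          return -1;
  }

  /*
   * Parse current_hexsz hex digits at buf into raw bytes.  On success,
   * return 1 and point *end at the first unparsed character, so callers
   * can chain parses left-to-right without hard-coded offsets.
   */
  static int parse_oid(const char *buf, unsigned char *raw, const char **end)
  {
          size_t i;

          for (i = 0; i < current_hexsz; i += 2) {
                  int hi = hexval(buf[i]);
                  int lo;

                  if (hi < 0)
                          return 0;
                  lo = hexval(buf[i + 1]);
                  if (lo < 0)
                          return 0;
                  raw[i / 2] = (unsigned char)((hi << 4) | lo);
          }
          *end = buf + current_hexsz;
          return 1;
  }

  int main(void)
  {
          /* two well-known SHA-1 values: the empty tree and the empty blob */
          const char *line =
                  "4b825dc642cb6eb9a060e54bf8d69288fbee4904 "
                  "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391\n";
          unsigned char oid_a[MAX_HEXSZ / 2], oid_b[MAX_HEXSZ / 2];
          const char *p = line;

          /* no "len > 82", no "buf + 41": the parser advances p itself */
          if (parse_oid(p, oid_a, &p) && *p++ == ' ' &&
              parse_oid(p, oid_b, &p) && *p++ == '\n')
                  printf("parsed two object ids of %zu hex chars each\n",
                         current_hexsz);
          else
                  printf("parse error\n");
          return 0;
  }

Note how the buffers are sized by the compile-time maximum while the parse
loop uses the run-time hash size; that is essentially the GIT_MAX_HEXSZ vs.
"current hash size" split brian is proposing above.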