Karthik Nayak <karthik.188@xxxxxxxxx> writes:

> diff --git a/object.h b/object.h
> index 114d45954d..b76830fce1 100644
> --- a/object.h
> +++ b/object.h
> @@ -62,7 +62,7 @@ void object_array_init(struct object_array *array);
>  
>  /*
>   * object flag allocation:
> - * revision.h:               0---------10         15             23------27
> + * revision.h:               0---------10         15             22------28
>   * fetch-pack.c:             01    67
>   * negotiator/default.c:       2--5
>   * walker.c:                 0-2
> @@ -82,7 +82,7 @@ void object_array_init(struct object_array *array);
>   * builtin/show-branch.c:    0-------------------------------------------26
>   * builtin/unpack-objects.c:                                 2021
>   */
> -#define FLAG_BITS 28
> +#define FLAG_BITS 29
>  
>  #define TYPE_BITS 3

I am afraid that this is not a good direction to go, given that the
way FLAG_BITS is used is like this:

    /*
     * The object type is stored in 3 bits.
     */
    struct object {
            unsigned parsed : 1;
            unsigned type : TYPE_BITS;
            unsigned flags : FLAG_BITS;
            struct object_id oid;
    };

28 was there not as a random number of bits we happen to be using;
it was derived as (32 - 3 - 1), i.e. to ensure that the bitfields
above are stored within a single word.

sizeof(struct object) is 40 bytes on x86-64, with offsetof(oid)
being 4 (i.e. the bitfields fit in a single 4-byte word).  If we
make FLAG_BITS 29, we will add 4 bytes to the structure and waste
31 bits for each and every in-core object.

Do we really need to allocate a new bit in the object flags, which
is already a scarce resource?
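
As a sanity check, here is a minimal, self-contained sketch of that
effect.  It uses a hypothetical oid_stub in place of the real struct
object_id (so the absolute sizes differ from the 40 bytes quoted
above), and the exact layout is ABI-dependent, but on a typical
x86-64 toolchain it shows the bitfields spilling into a second
4-byte unit once their widths exceed 32 bits, with offsetof(oid)
jumping from 4 to 8:

    #include <stddef.h>
    #include <stdio.h>

    /*
     * Stand-in for struct object_id, used only to keep this sketch
     * self-contained; the real structure is defined elsewhere.
     */
    struct oid_stub {
            unsigned char hash[32];
    };

    /* 1 + 3 + 28 = 32 bits: the bitfields share one 4-byte unit. */
    struct obj_28bits {
            unsigned parsed : 1;
            unsigned type : 3;
            unsigned flags : 28;
            struct oid_stub oid;
    };

    /* 1 + 3 + 29 = 33 bits: "flags" no longer fits beside parsed/type. */
    struct obj_29bits {
            unsigned parsed : 1;
            unsigned type : 3;
            unsigned flags : 29;
            struct oid_stub oid;
    };

    int main(void)
    {
            printf("28 flag bits: sizeof %zu, offsetof(oid) %zu\n",
                   sizeof(struct obj_28bits),
                   offsetof(struct obj_28bits, oid));
            printf("29 flag bits: sizeof %zu, offsetof(oid) %zu\n",
                   sizeof(struct obj_29bits),
                   offsetof(struct obj_29bits, oid));
            return 0;
    }

On such a setup the second structure comes out 4 bytes (32 bits)
larger than the first even though only one additional flag bit is
actually used, which is the 31-bit waste described above.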