Nguyễn Thái Ngọc Duy <pclouds@xxxxxxxxx> writes:

> It's very rare that an uncompressed object is larger than 4GB
> (partly because Git does not handle those large files very well to
> begin with). Let's optimize it for the common case where object size
> is smaller than this limit.
>
> Shrink the size field down to 32 bits [1] and one overflow bit. If the
> size is too large, we read it back from disk.

OK.

> Add two compare helpers that can take advantage of the overflow
> bit (e.g. if the file is 4GB+, chances are it's already larger than
> core.bigFileThreshold and there's no point in comparing the actual
> value).

I had trouble reading the callers of these helpers.

> +static inline int oe_size_less_than(const struct object_entry *e,
> +				    unsigned long limit)
> +{
> +	if (e->size_valid)
> +		return e->size_ < limit;
> +	if (limit > maximum_unsigned_value_of_type(uint32_t))
> +		return 1;

When the size_valid bit is false, that means that the size is larger
than 4GB.  If "limit" is larger than 4GB, then we do not know
anything, no?

I'd understand if this "optimization" were

	if (limit < 4GB) {
		/*
		 * we know e whose size won't fit in 4GB is larger
		 * than that!
		 */
		return 0;
	}

> +	return oe_size(e) < limit;
> +}

Also, don't we want to use uintmax_t throughout the callchain?
How would the code in this series work when your ulong is 32-bit?

> +
> +static inline int oe_size_greater_than(const struct object_entry *e,
> +				       unsigned long limit)
> +{
> +	if (e->size_valid)
> +		return e->size_ > limit;
> +	if (limit <= maximum_unsigned_value_of_type(uint32_t))
> +		return 1;
> +	return oe_size(e) > limit;
> +}
> +
> +static inline void oe_set_size(struct object_entry *e,
> +			       unsigned long size)
> +{
> +	e->size_ = size;
> +	e->size_valid = e->size_ == size;
> +}
> +
> #endif