Jeff King <peff@xxxxxxxx> writes:

> Just as the previous commit implemented BLOB_NONE, we can support
> BLOB_LIMIT filters by looking at the sizes of any blobs in the result
> and unsetting their bits as appropriate. This is slightly more expensive
> than BLOB_NONE, but still produces a noticeable speedup (these results
> are on git.git):
>
>   Test                                         HEAD~2            HEAD
>   ------------------------------------------------------------------------------------
>   5310.7: rev-list count with blob:none        1.80(1.77+0.02)   0.22(0.20+0.02) -87.8%
>   5310.8: rev-list count with blob:limit=1k    1.99(1.96+0.03)   0.29(0.25+0.03) -85.4%

That's a respectable improvement.  A packed_object_info() call that asks
only for the inflated size is quite cheap when the object is a delta,
and hopefully we have more deltified blobs than deflated ones.

> +static unsigned long get_size_by_pos(struct bitmap_index *bitmap_git,
> +				     uint32_t pos)
> +{
> +	struct packed_git *pack = bitmap_git->pack;
> +	unsigned long size;
> +	struct object_info oi = OBJECT_INFO_INIT;
> +
> +	oi.sizep = &size;
> +
> +	if (pos < pack->num_objects) {
> +		struct revindex_entry *entry = &pack->revindex[pos];
> +		if (packed_object_info(the_repository, pack,
> +				       entry->offset, &oi) < 0) {
> +			struct object_id oid;
> +			nth_packed_object_oid(&oid, pack, entry->nr);
> +			die(_("unable to get size of %s"), oid_to_hex(&oid));
> +		}
> +	} else {
> +		struct eindex *eindex = &bitmap_git->ext_index;
> +		struct object *obj = eindex->objects[pos - pack->num_objects];
> +		if (oid_object_info_extended(the_repository, &obj->oid, &oi, 0) < 0)
> +			die(_("unable to get size of %s"), oid_to_hex(&obj->oid));
> +	}
> +
> +	return size;
> +}
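
The commit message describes the approach as checking the size of each
blob in the result and unsetting its bit when it exceeds the limit; the
quoted get_size_by_pos() is the size-lookup half of that.  As a rough,
standalone illustration of the other half, here is a sketch of such a
bit-clearing loop.  The plain uint64_t word array and the get_size()
stub are toy stand-ins, not Git's ewah/bitmap machinery or the patch's
actual filter function.

	/*
	 * Sketch: walk every set bit, look up the blob's size, and
	 * clear the bit when the blob is larger than the limit.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define NR_OBJECTS 8
	#define BITS_PER_WORD 64

	/* Pretend blob sizes, indexed by position (stand-in for the pack lookup). */
	static const unsigned long sizes[NR_OBJECTS] = {
		100, 2048, 512, 4096, 10, 900, 1500, 64,
	};

	static unsigned long get_size(uint32_t pos)
	{
		return sizes[pos];
	}

	static void filter_by_size(uint64_t *words, uint32_t nr_bits,
				   unsigned long limit)
	{
		uint32_t pos;

		for (pos = 0; pos < nr_bits; pos++) {
			uint64_t mask = (uint64_t)1 << (pos % BITS_PER_WORD);
			uint64_t *word = &words[pos / BITS_PER_WORD];

			if (!(*word & mask))
				continue;	/* not in the result */
			if (get_size(pos) > limit)
				*word &= ~mask;	/* too big: drop from the result */
		}
	}

	int main(void)
	{
		uint64_t result[1] = { 0xff };	/* all eight objects selected */
		uint32_t pos;

		filter_by_size(result, NR_OBJECTS, 1024);

		for (pos = 0; pos < NR_OBJECTS; pos++)
			if (result[0] & ((uint64_t)1 << pos))
				printf("keep pos %u (size %lu)\n",
				       pos, get_size(pos));
		return 0;
	}

Note that, as in the patch, only the bits that are already set get a
size lookup, so the extra cost scales with the size of the result
rather than with the whole pack.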
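On the "quite cheap when the object is a delta" point: the delta data
begins with two base-128 varints (source size, then result size), so a
size-only query needs to inflate just the first few bytes and never has
to apply the delta.  The decoder below mirrors that on-disk encoding as
a minimal sketch; parse_varint() is an illustrative stand-in, not Git's
own helper.

	#include <stdint.h>
	#include <stdio.h>

	/* Decode one little-endian base-128 varint; a set MSB means "more bytes". */
	static unsigned long parse_varint(const unsigned char **p,
					  const unsigned char *end)
	{
		unsigned long size = 0;
		int shift = 0;
		unsigned char byte;

		do {
			byte = *(*p)++;
			size |= (unsigned long)(byte & 0x7f) << shift;
			shift += 7;
		} while ((byte & 0x80) && *p < end);

		return size;
	}

	int main(void)
	{
		/* 0xe8 0x07 encodes 1000; 0x90 0x4e encodes 10000. */
		const unsigned char delta[] = { 0xe8, 0x07, 0x90, 0x4e };
		const unsigned char *p = delta;
		const unsigned char *end = delta + sizeof(delta);

		unsigned long src_size = parse_varint(&p, end);
		unsigned long result_size = parse_varint(&p, end);

		printf("source %lu bytes, result %lu bytes\n",
		       src_size, result_size);
		return 0;
	}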