When handling single object reuse from a single-pack bitmap, we first
convert the base object's offset into a pack-relative position. Then,
we check if the base object's offset occurs after the delta object's
offset. If that's the case, we kick the delta'd object back to the slow
path.

But there are a couple of oddities here:

  - However unlikely it is that we'll find a delta/base pair with
    base_offset >= offset, it doesn't make sense to convert the base's
    offset to a pack-relative position if we're going to throw out the
    reuse opportunity anyway.

  - We "convert" the base object's position into bitmap order by
    offsetting it by 'pack->bitmap_pos'. But this makes no sense, since
    single-pack bitmaps have only one pack (by definition), and that
    pack's first object occurs at bit position 0.

Let's clean up this part of 'try_partial_reuse()' by (a) first seeing
if we can reuse the delta before converting its base object's offset
into a pack position, and (b) avoiding the unnecessary conversion with
'pack->bitmap_pos'.

(b) allows us to drop the intermediate 'base_pos' variable and instead
write directly into 'base_bitmap_pos', which we also change here for
further clarity.

Signed-off-by: Taylor Blau <me@xxxxxxxxxxxx>
---
 pack-bitmap.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 5c5c26efe0d..faabc0ba0e9 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -2102,11 +2102,6 @@ static int try_partial_reuse(struct bitmap_index *bitmap_git,
 			return 0;
 		}
 	} else {
-		uint32_t base_pos;
-
-		if (offset_to_pack_pos(pack->p, base_offset,
-				       &base_pos) < 0)
-			return 0;
 		/*
 		 * We assume delta dependencies always point backwards.
 		 * This lets us do a single pass, and is basically
@@ -2124,7 +2119,9 @@ static int try_partial_reuse(struct bitmap_index *bitmap_git,
 		 */
 		if (base_offset >= offset)
 			return 0;
-		base_bitmap_pos = pack->bitmap_pos + base_pos;
+		if (offset_to_pack_pos(pack->p, base_offset,
+				       &base_bitmap_pos) < 0)
+			return 0;
 	}
 
 	/*
-- 
2.47.0.11.g487258bca34
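
For quick reference, here is roughly how the delta-handling branch of
'try_partial_reuse()' reads with this patch applied, assembled from the
two hunks above (the middle of the comment block, which the hunk
context elides, is abbreviated to "..."):

	} else {
		/*
		 * We assume delta dependencies always point backwards.
		 * This lets us do a single pass, and is basically
		 * ...
		 */
		if (base_offset >= offset)
			return 0;
		if (offset_to_pack_pos(pack->p, base_offset,
				       &base_bitmap_pos) < 0)
			return 0;
	}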