Re: [PATCH] fix simple deepening of a repo

On Monday 24 August 2009, Nicolas Pitre wrote:
> On Sun, 23 Aug 2009, Junio C Hamano wrote:
> > Nicolas Pitre <nico@xxxxxxx> writes:
> > > If all refs sent by the remote repo during a fetch are reachable
> > > locally, then no further conversation is performed with the
> > > remote.  This check is skipped when the --depth argument is
> > > provided, to allow the deepening of a shallow clone whose
> > > corresponding remote repo has not changed.
> > >
> > > However, some additional filtering was added in commit c29727d5
> > > to remove those refs which are equal on both sides.  If the
> > > remote repo has not changed, then the list of refs to give the
> > > remote process becomes empty and simply attempting to deepen a
> > > shallow repo always fails.
> > >
> > > Let's stop being smart in that case and simply send the whole
> > > list over.  The remote will do the right thing anyway.
> > >
> > > Test cases for this issue are also provided.
> > >
> > > Signed-off-by: Nicolas Pitre <nico@xxxxxxx>
> > > ---
> >
> > Thanks.  The fix looks correct (as usual with patches from you).
> >
> > But it makes me wonder if this logic to filter refs is buying us
> > anything.
> >
> > >  	for (rm = refs; rm; rm = rm->next) {
> > > +		nr_refs++;
> > >  		if (rm->peer_ref &&
> > >  		    !hashcmp(rm->peer_ref->old_sha1, rm->old_sha1))
> > >  			continue;
> >
> > 		ALLOC_GROW(heads, nr_heads + 1, nr_alloc);
> > 		heads[nr_heads++] = rm;
> > 	}
> >
> > What is the point of not asking for the refs that we know are the
> > same?
>
> I could see the advantage if the number of refs is really huge.
> Weren't some converted repositories producing a lot of branches
> and/or tags significantly slowing down fetch operations?  Granted,
> that was long ago when that issue got "fixed", so the details are
> fuzzy to me.

I'm converting several CVS repos with ~50 000 refs each to Git, so I'm 
happy with any change that can speed things up for repos with many refs.

Right now, my biggest gripe is that a 'git push --mirror' on such a repo 
can easily take ~10 minutes, even though the actual pack generation and 
transfer only take a couple of seconds. It seems to spend those ~10 
minutes generating the list of changed/added/deleted refs...
Unfortunately, I haven't had time to look into it properly yet.
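
For anyone following along, here is a rough sketch of the loop being
discussed, with the fallback that Nico's patch introduces.  This is
plain illustrative C with made-up names (ref_entry, collect_heads,
want_deepen), not the actual fetch-pack code, which uses struct ref,
hashcmp() and ALLOC_GROW() as in the hunk quoted above:

/*
 * Illustrative sketch only -- not the actual fetch-pack code.
 */
#include <stdlib.h>
#include <string.h>

struct ref_entry {
	struct ref_entry *next;
	unsigned char old_sha1[20];	/* what the remote advertised */
	unsigned char peer_sha1[20];	/* what we already have locally */
	int has_peer;
};

/* Build the array of refs we will actually ask the remote for. */
static struct ref_entry **collect_heads(struct ref_entry *refs,
					int want_deepen, int *nr_out)
{
	struct ref_entry **heads = NULL, *rm;
	int nr_heads = 0, nr_refs = 0, nr_alloc = 0;

	for (rm = refs; rm; rm = rm->next) {
		nr_refs++;
		/* Filter out refs that are already equal on both sides. */
		if (rm->has_peer &&
		    !memcmp(rm->peer_sha1, rm->old_sha1, 20))
			continue;
		if (nr_heads >= nr_alloc) {
			nr_alloc = nr_alloc ? 2 * nr_alloc : 16;
			heads = realloc(heads, nr_alloc * sizeof(*heads));
		}
		heads[nr_heads++] = rm;
	}

	/*
	 * The fix under discussion: when deepening (--depth) and the
	 * filtered list came out empty, send the whole list instead,
	 * so the remote can still compute the deepened history.
	 */
	if (want_deepen && !nr_heads) {
		free(heads);
		heads = malloc(nr_refs * sizeof(*heads));
		for (rm = refs; rm; rm = rm->next)
			heads[nr_heads++] = rm;
	}

	*nr_out = nr_heads;
	return heads;
}

The point is only the shape of the change: filter out refs that are
already equal on both sides, then fall back to the full list when
--depth is in play and nothing survived the filter.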


...Johan

-- 
Johan Herland, <johan@xxxxxxxxxxx>
www.herland.net
