Re: A naive proposal for preventing loose object explosions

On Friday, September 06, 2013 11:19:02 am Junio C Hamano wrote:
> mfick@xxxxxxxxxxxxxx writes:
> > Object lookups should likely not get any slower than if
> > repack were not run, and the extra new pack might
> > actually help find some objects quicker.
> 
> In general, having an extra pack, only to keep objects
> that you know are available in other packs, will make
> _all_ object accesses, not just the ones that are
> contained in that extra pack, slower.

My assumption was that if the new pack, with all the 
consolidated reachable objects in it, happens to be 
searched first, it would actually speed things up.  And if 
it is searched last, then any object found in it was not in 
the other packs anyway, so how could the lookup have been 
made slower?  It seems this would only slow down the 
missing-object path.

But it sounds like all the index files are mmapped up 
front?  Then yes, I can see how it would slow things down.  
However, it is only one extra (hopefully now well 
optimized) pack.  My base assumption was that even if it 
does slow things down, the cost would likely be 
unmeasurable and a price worth paying to avoid an extreme 
penalty.
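
(To illustrate what I had in mind: as far as I understand 
it, lookup is a walk over the pack list with one index 
probe per pack.  A simplified sketch, modeled loosely on 
git's find_pack_entry() but not the real code:)

#include <stddef.h>
#include <sys/types.h>

/* One entry per mmapped .idx/.pack pair. */
struct packed_git {
        struct packed_git *next;
        /* ... mmapped index data, pack fd, etc. ... */
};

/* Binary search of one pack's index; declared extern here
 * only to keep the sketch self-contained. */
extern off_t find_entry_in_pack(const unsigned char *sha1,
                                struct packed_git *p);

static struct packed_git *find_object(struct packed_git *packs,
                                      const unsigned char *sha1,
                                      off_t *ofs)
{
        struct packed_git *p;

        for (p = packs; p; p = p->next) {
                /* every pack probed costs one binary
                 * search in that pack's index */
                off_t found = find_entry_in_pack(sha1, p);

                if (found) {
                        *ofs = found;
                        return p;
                }
        }
        return NULL;    /* only then try loose objects */
}

If the consolidated pack sits at the front of that list, 
most lookups hit on the first probe; if it sits at the 
back, only lookups that were going to miss everywhere pay 
for the extra probe.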


> Instead of mmapping all the .idx files for all the
> available packfiles, we could build a table that
> records, for each packed object, from which packfile at
> what offset the data is available to optimize the
> access, but obviously building that in-core table will
> take time, so it may not be a good trade-off to do so at
> runtime (a precomputed super-.idx that we can mmap at
> runtime might be a good way forward if that turns out to
> be the case).
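
(To make sure I follow, here is roughly what I picture that 
table looking like.  This is only a sketch; every name in 
it is hypothetical, not existing git code:)

#include <stdint.h>
#include <string.h>
#include <sys/types.h>

/* One record per packed object, sorted by object name, so
 * one binary search replaces the per-pack index probes. */
struct super_idx_entry {
        unsigned char sha1[20];  /* object name */
        uint32_t pack_id;        /* which packfile */
        off_t offset;            /* offset within that pack */
};

struct super_idx {
        uint32_t nr;
        struct super_idx_entry *table;  /* could be mmapped */
};

static struct super_idx_entry *super_idx_lookup(struct super_idx *idx,
                                                const unsigned char *sha1)
{
        uint32_t lo = 0, hi = idx->nr;

        while (lo < hi) {
                uint32_t mi = lo + (hi - lo) / 2;
                int cmp = memcmp(sha1, idx->table[mi].sha1, 20);

                if (!cmp)
                        return &idx->table[mi];
                if (cmp < 0)
                        hi = mi;
                else
                        lo = mi + 1;
        }
        return NULL;  /* not packed; fall back to loose */
}
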
> 
> > Does this sound like it would work?
> 
> Sorry, but it is unclear what problem you are trying to
> solve.

I think you guessed it below: I am trying to prevent loose 
object explosions by keeping unreachable objects around in 
packs (instead of loose) until expiry.  With the current 
way that pack-objects works, this is the best I could come 
up with (I said naive). :(

Today git-repack calls git pack-objects like this:

git pack-objects --keep-true-parents --honor-pack-keep \
    --non-empty --all --reflog $args </dev/null "$PACKTMP"

This has no mechanism to place unreachable objects in a 
pack.  If git pack-objects supported an option that 
streamed them to a separate file (as you suggest below), 
that would likely be the main piece needed to avoid the 
heavy-handed approach I was suggesting.

The problem is how to define the interface for this.  How 
do we get the filename of the new unreachable packfile?  
Today the name of the new packfile is sent to stdout; would 
we just tack on another name?  That seems like it would 
break some assumptions.  Maybe it would be OK if it only 
did that when an --unreachable flag was added?  Then 
git-repack could be enhanced to understand that flag and 
the extra filenames it outputs (see the sketch below).
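
Concretely, the output side might be as small as this; 
every name in the sketch except sha1_to_hex() is invented:

#include <stdio.h>

/* git's helper for printing an object name; the rest of
 * this sketch is hypothetical. */
extern char *sha1_to_hex(const unsigned char *sha1);

static void report_written_packs(const unsigned char *pack_sha1,
                                 const unsigned char *unreachable_sha1,
                                 int unreachable_flag)
{
        /* today's behaviour: one pack name on stdout */
        printf("%s\n", sha1_to_hex(pack_sha1));

        /* only with --unreachable, so existing callers
         * that expect a single name are not broken */
        if (unreachable_flag && unreachable_sha1)
                printf("%s\n", sha1_to_hex(unreachable_sha1));
}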


> Is it that you do not like that "repack -A" ejects
> unreferenced objects and makes it loose, which you may
> have many?

Yes.  Several times a week we have people pushing the 
kernel to the wrong projects, and this leads to 4M loose 
objects. :(  That kind of explosion leads to hour-plus-long 
fetches, and without a solution to this regular problem we 
are very scared to move our repos off of SSDs.


> The loosen_unused_packed_objects() function used by
> "repack -A" calls the force_object_loose() function
> (actually, it is the sole caller of the function).  If
> you tweak the latter to stream to a single new
> "graveyard" packfile and mark it as "kept until expiry",
> would it solve the issue the same way but with much
> smaller impact?

Yes.
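
Roughly, I imagine the result looking like the sketch 
below.  The graveyard_* names are invented; only 
loosen_unused_packed_objects() and force_object_loose() 
are real functions:

/* Instead of ejecting each unreferenced object as a loose
 * file via force_object_loose(), stream them all into one
 * new pack that is marked as kept until expiry. */
struct graveyard;      /* wraps the single output packfile */
struct object_entry;   /* an object found only in old packs */

extern struct graveyard *graveyard_open(void);
extern int graveyard_stream_object(struct graveyard *g,
                                   struct object_entry *obj);
extern void graveyard_close(struct graveyard *g);  /* .idx + .keep */

static void eject_to_graveyard(struct object_entry **objs,
                               unsigned int nr)
{
        struct graveyard *g = graveyard_open();
        unsigned int i;

        for (i = 0; i < nr; i++)
                graveyard_stream_object(g, objs[i]);  /* not loose */

        graveyard_close(g);
}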
 
> There already is an infrastructure available to open a
> single output packfile and send multiple objects to it
> in bulk-checkin.c, and I am wondering if you can take
> advantage of the framework.  The existing interface to
> it assumes that the object data is coming from a file
> descriptor (the interface was built to support
> bulk-checkin of many objects in an empty repository),
> and it needs refactoring to allow stream_to_pack() to
> take different kind of data sources in the form of
> stateful callback function, though.
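
For anyone who picks this up, I imagine the refactoring 
would hand stream_to_pack() something like the callback 
below.  bulk-checkin.c and stream_to_pack() are real; the 
shape of this interface is only my guess:

#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* A stateful data source: returns bytes copied into buf,
 * 0 at end of object data, -1 on error. */
typedef ssize_t (*pack_source_fn)(void *buf, size_t len, void *state);

struct pack_data_source {
        pack_source_fn read;
        void *state;  /* e.g. an fd, or a cursor into an old pack */
};

/* The existing fd-based path becomes one implementation: */
static ssize_t read_from_fd(void *buf, size_t len, void *state)
{
        return read(*(int *)state, buf, len);
}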

That feels beyond what I could currently dedicate the time 
to do.  Like I said, my solution is heavy-handed, but it 
felt simple enough for me to try.  I can spare the extra 
disk space, and I am not convinced the performance hit 
would be bad.  I would, of course, be delighted if someone 
else were to do what you suggest, but I get that it's my 
itch...

-Martin


-- 
The Qualcomm Innovation Center, Inc. is a member of Code 
Aurora Forum, hosted by The Linux Foundation
 