> On Fri, 6 Apr 2007, Junio C Hamano wrote:
> > Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> writes:
> > > On Fri, 6 Apr 2007, Dana How wrote:
> > And I agree with Nico that rollback after a failed write beyond
> > the boundary may not work correctly, so if we really want to do
> > this safely and sanely while satisfying Dana's desire to have a
> > hard limit, I think something like this is needed:
> >
> >  - use "starting offset" to decide when to split;
> >
> >  - introduce a "hard limit", perhaps optionally, to make sure
> >    that the result of writing out a packfile does not overstep
> >    that limit (i.e. the last object written below the "starting
> >    offset limit" might make the pack go over 700MB).
> >
> > which means you would specify 600 as starting offset limit and
> > 680 (or something like that) as the hard tail offset limit.
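
just to make sure I follow the two limits, here is a rough sketch of the
decision rule as I read it; the names are made up and this isn't actual
pack-objects code, just an illustration:

  /*
   * Hypothetical sketch of the two-limit rule: soft_limit is the
   * "starting offset" limit that decides when to split, hard_limit is
   * the optional cap the finished pack must never exceed.
   */
  #include <stdint.h>

  struct pack_state {
          uint64_t offset;      /* bytes already written to the current pack */
          uint64_t soft_limit;  /* e.g. 600MB: no object may *start* past this */
          uint64_t hard_limit;  /* e.g. 680MB: finished pack must stay below this */
  };

  enum write_decision { WRITE_HERE, SPLIT_FIRST, REFUSE };

  static enum write_decision decide(const struct pack_state *p, uint64_t obj_size)
  {
          /* soft rule: only objects starting below soft_limit go into this pack */
          if (p->offset >= p->soft_limit)
                  return SPLIT_FIRST;
          /* hard rule: the object's tail must not push the pack past hard_limit */
          if (p->hard_limit && p->offset + obj_size > p->hard_limit)
                  return obj_size > p->hard_limit ? REFUSE : SPLIT_FIRST;
          return WRITE_HERE;
  }

with soft_limit = 600MB and hard_limit = 680MB this gives exactly the
600/680 behavior described above: the last object may begin just under
600MB, but its tail is never allowed past 680MB.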
> Again, *in*practice*, for any sane situation, if you want to fit things on
> a CD-ROM, just give a limit of 600MB, and I can pretty much guarantee that
> you'll see a slop of just a percent or two for any realistic setup. And if
> it goes up to 660MB, you'll still fit on any CD, if you really care that
> the result fits on a CD.
There are going to be cases where you have a fixed size, but would really like
to stream things rather than write a temp file and then send that (one example
of wanting to stream things, but without the size cap, is the network pull, where
we start sending things before we've finished figuring out the details of what
we are going to send). With an appropriately sized buffer you could stream to a
CD burner, for example, and while this may not make a huge difference in the time
taken to make the CD, it will save you a significant amount of disk I/O and
buffer cache, which can then stay populated with info that's more useful to the user.
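
As a rough sketch of what the streaming side could look like (a hypothetical
helper, nothing that exists in git today): every write is checked against the
hard cap before it leaves the process, so no temp file is involved and an
overshoot fails cleanly before the medium does:

  /*
   * Hypothetical capped streaming writer: forward pack data straight to
   * its destination fd with a hard byte limit, instead of spooling it to
   * a temporary file first.
   */
  #include <errno.h>
  #include <stdint.h>
  #include <unistd.h>

  struct capped_stream {
          int out_fd;        /* CD burner, network socket, ... */
          uint64_t written;
          uint64_t cap;      /* hard size limit for the whole stream */
  };

  /* returns 0 on success, -1 with errno = EFBIG if the cap would be exceeded */
  static int capped_write(struct capped_stream *s, const void *buf, size_t len)
  {
          if (s->written + len > s->cap) {
                  errno = EFBIG;
                  return -1;          /* stop before overshooting the medium */
          }
          while (len) {
                  ssize_t n = write(s->out_fd, buf, len);
                  if (n < 0) {
                          if (errno == EINTR)
                                  continue;
                          return -1;
                  }
                  buf = (const char *)buf + n;
                  len -= (size_t)n;
                  s->written += (uint64_t)n;
          }
          return 0;
  }

The pack writer could then push its output through something like
capped_write() instead of a tempfile fd, which is all the "appropriately
sized buffer" case above really needs.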
David Lang