bdowning@xxxxxxxxx (Brian Downing) writes:

> (These timings are for the Git pack on Linux/amd64, --window and --depth
> both 100.  Since /usr/bin/time doesn't seem to report any useful memory
> statistics on Linux, I also have a "ps aux" line from when the memory
> size looked stable.  This was different from run to run, but it shows
> the two are in the same order of magnitude.)
>
> Unpatched:
> 54.99user 0.18system 0:56.80elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (14major+32417minor)pagefaults 0swaps
> bdowning  5290 98.7  4.5 106788 92900 pts/1  R+  01:26  0:49 git pack-obj
>
> Patched:
> 55.37user 0.19system 0:56.35elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+32249minor)pagefaults 0swaps
> bdowning  6086  100  4.5 106880 92996 pts/1  R+  01:29  0:49 git pack-obj

The number of minor faults is comparable (and slightly favorable), which
is a good sign.

> The patched version is actually smaller in both SBCL's and Git's case
> (again, --window 100 and --depth 100):
>
> SBCL: 61696 bytes smaller (13294225-13232529)
> Git:  16010 bytes smaller (12690424-12674414)
>
> I believe the reason for this is that more deltas can get in under the
> depth limit.

Very sensible indeed.

>> It would become worrisome (*BUT* infinitely more interesting)
>> once you start talking about a tradeoff between a slightly larger
>> delta and a much shorter delta chain.  Such a tradeoff, if done right,
>> would make a lot of sense, but I do not offhand think of a way
>> to strike a proper balance between them efficiently.
>
> Yeah, I was thinking about that too, and came to the same conclusion.
> I suspect you'd have to save a /lot/ of delta depth to want to pay any
> more I/O, though.

That may not be so.  A deeper delta chain also means more I/O (and worse,
because the reads can come from discontiguous areas of the pack), plus
the cost of applying each delta in the chain.
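The tradeoff in that last point can be sketched with a toy cost model. This is not Git's actual code; the cost constants and function names below are illustrative assumptions only. The point is that reconstructing an object at depth d means locating, reading, and applying d deltas on top of the base, so a single slightly larger delta can still beat a long chain of small ones:

```python
# Toy model (NOT Git internals) of reconstructing an object stored at
# the end of a delta chain in a pack.  All names and cost constants are
# illustrative assumptions.

def reconstruction_cost(chain, seek_cost=1.0, read_cost_per_byte=0.001,
                        apply_cost_per_byte=0.0005):
    """chain: list of on-disk sizes; the base object first, deltas after.

    Each entry must be located (a potential seek, since deltas can live
    in discontiguous areas of the pack), read, and -- for everything
    after the base -- applied on top of the previous result.
    """
    cost = 0.0
    for i, size in enumerate(chain):
        cost += seek_cost                      # locate the entry
        cost += size * read_cost_per_byte      # read it off disk
        if i > 0:
            cost += size * apply_cost_per_byte # apply the delta
    return cost

# A shallow chain with one somewhat larger delta...
shallow = reconstruction_cost([100_000, 6_000])
# ...versus a deep chain of ten smaller deltas reaching the same object.
deep = reconstruction_cost([100_000] + [1_000] * 10)
print(shallow < deep)  # the shallow chain wins despite the bigger delta
```

Under these (made-up) constants the per-entry seek cost dominates, which is why you would indeed have to save a lot of on-disk bytes before a deeper chain pays for itself at read time.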