Re: [PATCH 00/11] writing out a huge blob to working tree

On Sun, May 15, 2011 at 05:30:20PM -0700, Junio C Hamano wrote:

> Interested parties may want to measure the performance impact of the last
> three patches. The series deliberately ignores core.bigFileThreshold and
> lets small and large blobs alike go through the streaming_write_entry()
> codepath, but it _might_ turn out that we would want to use the new code
> only for large-ish blobs.

Hmm.

  $ cd compile/linux-2.6
  $ rm -rf *
  $ time git.v1.7.5 checkout -f
  real    0m4.405s
  user    0m3.592s
  sys     0m0.804s

  $ rm -rf *
  $ time git.jch.streaming checkout -f
  real    0m7.062s
  user    0m5.188s
  sys     0m1.776s

(Actually, those times are the best of 5 runs in each case.) So there is
definitely a slowdown for the non-huge case. Bisection points to
your cd36b7b (streaming_write_entry(): use streaming API in
write_entry()).

According to perf, though, it's not the increased writes; the slowdown
is actually from create_pack_revindex, in this call chain:

 create_pack_revindex
 find_pack_revindex
 packed_object_info_detail
 sha1_object_info_extended
 istream_source
 open_istream
 streaming_write_entry
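
If we do end up honoring core.bigFileThreshold as you suggested, I'd
imagine the gate looking something like the sketch below. This is purely
illustrative, not the actual patch: the helper name want_streaming() is
made up, the call site in write_entry() is hand-waved, and it assumes the
in-tree sha1_object_info() helper and the big_file_threshold variable
that backs core.bigFileThreshold.

  #include "cache.h"

  /*
   * Sketch only: decide whether a blob is big enough to bother with
   * the streaming codepath.  Small blobs would keep the existing
   * read_sha1_file() path and avoid the open_istream() ->
   * sha1_object_info_extended() -> revindex cost above.
   */
  static int want_streaming(const unsigned char *sha1)
  {
          unsigned long size;

          if (sha1_object_info(sha1, &size) != OBJ_BLOB)
                  return 0;
          return size >= big_file_threshold;
  }

  /*
   * ...and then write_entry() would do something like:
   *
   *     if (want_streaming(ce->sha1) &&
   *         !streaming_write_entry(ce, path, ...))
   *             return 0;
   *
   * falling back to the current codepath otherwise.
   */

Whether the extra size lookup per entry costs anything noticeable would
need measuring too, of course.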

-Peff

