Re: [PATCH] cat-file: reduce write calls for unfiltered blobs

Hi Eric and Peff

On 21/06/2024 07:29, Jeff King wrote:
> On Fri, Jun 21, 2024 at 02:04:57AM +0000, Eric Wong wrote:

>> While the --buffer switch is useful for non-interactive batch use,
>> buffering doesn't work with processes using request-response loops since
>> idle times are unpredictable between requests.
>>
>> For unfiltered blobs, our streaming interface now appends the initial
>> blob data directly into the scratch buffer used for object info.
>> Furthermore, the final blob chunk can hold the output delimiter before
>> making the final write(2).

> So we're basically saving one write() per object. I'm not that surprised
> you didn't see a huge time improvement. I'd think most of the effort is
> spent zlib decompressing the object contents.

If I'm reading the changes correctly then I think we may be saving more than one write for large objects: we now seem to allocate a buffer large enough to hold the whole object rather than using a fixed 16KB buffer. The streaming read functions seem to try to fill the whole buffer before returning, so I think we'll try to write the whole object at once. I'm not sure that approach is sensible for large blobs, due to the extra memory consumption, and it does not seem to fit the behavior of the other streaming functions.
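
For comparison, the existing code streams through a fixed-size buffer, roughly like this (a simplified sketch of what stream_blob_to_fd() in streaming.c does; filters and some error handling omitted):

        struct git_istream *st = open_istream(the_repository, oid,
                                              &type, &size, NULL);
        char buf[1024 * 16]; /* the fixed 16KB scratch buffer */
        ssize_t readlen;

        for (;;) {
                readlen = read_istream(st, buf, sizeof(buf));
                if (readlen <= 0)
                        break; /* EOF or error */
                if (write_in_full(fd, buf, readlen) < 0)
                        return -1;
        }
        close_istream(st);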

If the reason for this change is to reduce the number of read() calls the consumer has to make, isn't that going to be limited by the capacity of the pipe? Does git writing more than PIPE_BUF bytes at a time really reduce the number of reads on the other side of the pipe?
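
To illustrate what I mean: the consumer typically sits in a read loop like the one below (a sketch; cat_file_fd and process() are placeholder names), and each read() returns at most what is currently buffered in the pipe, however large the producer's write() was:

        char buf[65536];
        ssize_t n;

        /*
         * Each read() returns at most the data sitting in the pipe
         * (the default pipe buffer is 64KB on Linux), so one giant
         * write() on git's side does not become one read() here.
         */
        while ((n = read(cat_file_fd, buf, sizeof(buf))) > 0)
                process(buf, n); /* placeholder for the consumer's parsing */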

>> +
>> +/*
>> + * stdio buffering requires extra data copies, using strbuf
>> + * allows us to read_istream directly into a scratch buffer
>> + */
>> +int stream_blob_to_strbuf_fd(int fd, struct strbuf *sb,
>> +				const struct object_id *oid)
>> +{

> This is a pretty convoluted interface. Did you measure that avoiding
> stdio actually provides a noticeable improvement?

Yes, this looks nasty, especially as the gotcha of the caller being responsible for writing any data left in the buffer when the function returns is undocumented.
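
At a minimum, something like the comment below would help (a sketch of the missing documentation, based on my reading of the patch):

        /*
         * Stream the blob at "oid" to "fd", using "sb" as the scratch
         * buffer. Any data already in "sb" (e.g. the object-info line)
         * is written out along with the blob. NOTE: on return "sb" may
         * still contain unwritten data; the caller is responsible for
         * flushing it to "fd".
         */
        int stream_blob_to_strbuf_fd(int fd, struct strbuf *sb,
                                     const struct object_id *oid);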

Your suggestion below to avoid looking up the object twice sounds like a nicer and hopefully more effective way of trying to improve the performance of "git cat-file".

Best Wishes

Phillip


> This function seems to mostly duplicate stream_blob_to_fd(). If we do
> want to go this route, it feels like we should be able to implement the
> existing function in terms of this one, just by passing in an empty
> strbuf?
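
For concreteness, I think that would look roughly like this (a sketch; the real stream_blob_to_fd() also takes a filter and a can_seek flag, which I'm ignoring here):

        int stream_blob_to_fd(int fd, const struct object_id *oid)
        {
                struct strbuf sb = STRBUF_INIT;
                int ret = stream_blob_to_strbuf_fd(fd, &sb, oid);

                /* flush whatever the callee left in the scratch buffer */
                if (!ret && sb.len && write_in_full(fd, sb.buf, sb.len) < 0)
                        ret = -1;
                strbuf_release(&sb);
                return ret;
        }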

> All that said, I think there's another approach that will yield much
> bigger rewards. The call to _get_ the object-info line is separate from
> the streaming code. So we end up finding and accessing each object
> twice, which is wasteful, especially since most objects aren't big
> enough that streaming is useful.

> If we could instead tell oid_object_info_extended() to just pass back
> the content when it's not huge, we could output it directly. I have a
> patch that does this. You can fetch it from https://github.com/peff/git,
> on the branch jk/object-info-round-trip. It drops the time to run
> "cat-file --batch-all-objects --unordered --batch" on git.git from ~7.1s
> to ~6.1s on my machine.
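
For anyone following along, I believe the shape of the idea is roughly this (a sketch of the approach, not the actual patch on that branch):

        struct object_info oi = OBJECT_INFO_INIT;
        enum object_type type;
        unsigned long size;
        void *content = NULL;

        oi.typep = &type;
        oi.sizep = &size;
        oi.contentp = &content; /* ask for the content in the same lookup */

        if (oid_object_info_extended(the_repository, oid, &oi, 0) < 0)
                return -1;
        /* one object lookup: print the metadata line, then the content */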

> I don't remember all the details of why I didn't polish up the patch. I
> think there was some refactoring needed in packed_object_info(), and I
> never got around to cleaning it up.

> But anyway, that's a much bigger improvement than what you've got here.
> It does still require two write() calls, since you'll get the object
> contents as a separate buffer. But it might be possible to teach
> oid_object_info_extended() to write into a buffer of your choice (so you
> could reserve some space at the front to format the metadata into, and
> likewise you could reuse the buffer to avoid malloc/free for each).
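
That way each response could go out in a single write(), something like this (a sketch assuming such an interface existed; none of this is in git today):

        struct strbuf out = STRBUF_INIT; /* reused across objects */

        strbuf_reset(&out);
        strbuf_addf(&out, "%s %s %"PRIuMAX"\n",
                    oid_to_hex(oid), type_name(type), (uintmax_t)size);
        strbuf_add(&out, content, size); /* ideally filled in-place by the lookup */
        strbuf_addch(&out, '\n');        /* the batch output delimiter */
        if (write_in_full(fd, out.buf, out.len) < 0)
                die_errno("write error"); /* metadata + content + delimiter in one write */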

> I don't know that I'll have time to revisit it in the near future, but
> if you like the direction feel free to take a look at the patch and see
> if you can clean it up. (It was written years ago, but I rebase my
> topics forward regularly and merge them into a daily driver, so it
> should be in good working order).

> -Peff





