Re: [PATCH] cat-file: reduce write calls for unfiltered blobs

Jeff King <peff@xxxxxxxx> wrote:
> On Fri, Jun 21, 2024 at 02:04:57AM +0000, Eric Wong wrote:
> 
> > While the --buffer switch is useful for non-interactive batch use,
> > buffering doesn't work with processes using request-response loops since
> > idle times are unpredictable between requests.
> > 
> > For unfiltered blobs, our streaming interface now appends the initial
> > blob data directly into the scratch buffer used for object info.
> > Furthermore, the final blob chunk can hold the output delimiter before
> > making the final write(2).
> 
> So we're basically saving one write() per object. I'm not that surprised
> you didn't see a huge time improvement. I'd think most of the effort is
> spent zlib-decompressing the object contents.

3 writes down to 1 for small objects, actually: header + blob + delimiter

I was mainly annoyed to strace my reader process and find 3 reads,
or even more for non-blocking sockets; the worst case (after the
initial wakeup via epoll_wait) is:

  read, read (EAGAIN), poll, read, read (EAGAIN), poll, read

But yeah, scheduler behavior is unpredictable on complex modern
systems.

> > +
> > +/*
> > + * stdio buffering requires extra data copies, using strbuf
> > + * allows us to read_istream directly into a scratch buffer
> > + */
> > +int stream_blob_to_strbuf_fd(int fd, struct strbuf *sb,
> > +				const struct object_id *oid)
> > +{
> 
> This is a pretty convoluted interface. Did you measure that avoiding
> stdio actually provides a noticeable improvement?

Yeah, I didn't get any measurable improvement with stdio; but my
measurements included AGPL Perl code on the reader side.

> This function seems to mostly duplicate stream_blob_to_fd(). If we do
> want to go this route, it feels like we should be able to implement the
> existing function in terms of this one, just by passing in an empty
> strbuf?

I didn't want to stuff too much into the loop given the hole-seeking
optimization logic for regular files in stream_blob_to_fd().

> All that said, I think there's another approach that will yield much
> bigger rewards. The call to _get_ the object-info line is separate from
> the streaming code. So we end up finding and accessing each object
> twice, which is wasteful, especially since most objects aren't big
> enough that streaming is useful.

Yeah, I noticed that and got confused, actually.

> If we could instead tell oid_object_info_extended() to just pass back
> the content when it's not huge, we could output it directly. I have a
> patch that does this. You can fetch it from https://github.com/peff/git,
> on the branch jk/object-info-round-trip. It drops the time to run
> "cat-file --batch-all-objects --unordered --batch" on git.git from ~7.1s
> to ~6.1s on my machine.

Cool, I'll look into it and probably combine the approaches.
Optimizations often have a snowballing effect :)

> But anyway, that's a much bigger improvement than what you've got here.
> It does still require two write() calls, since you'll get the object
> contents as a separate buffer. But it might be possible to teach
> oid_object_info_extended() to write into a buffer of your choice (so you
> could reserve some space at the front to format the metadata into, and
> likewise you could reuse the buffer to avoid malloc/free for each).

Yeah, that sounds like a good idea.

> I don't know that I'll have time to revisit it in the near future, but
> if you like the direction feel free to take a look at the patch and see
> if you can clean it up. (It was written years ago, but I rebase my
> topics forward regularly and merge them into a daily driver, so it
> should be in good working order).

Thanks.  I'll try to take a look at it soon.
