Re: [PATCH v4] compat: Fix read() of 2GB and more on Mac OS X

Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> writes:

> So it would probably be a great idea to make the filtering code able
> to do things in smaller chunks, but I suspect that the patch to chunk
> up xread/xwrite is the right thing to do anyway.

Yes and yes, but the first yes is a bit tricky for writing things
out: the recipient of the filter knows the size of the input but
not of the output, and both loose and packed objects need to record
the length of the object at the very beginning.
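
Just to illustrate the reading side of the chunking idea, something
along these lines (the cap value and the name chunked_read are made
up for illustration; the retry-on-EINTR loop mirrors what our
xread() already does):

#include <unistd.h>
#include <errno.h>

/* illustrative cap; the real limit would live in wrapper.c */
#define MAX_IO_SIZE (8 * 1024 * 1024)

/*
 * Never ask the OS for more than MAX_IO_SIZE bytes in a single
 * read(), so platforms that fail on 2GB+ requests still work.
 */
static ssize_t chunked_read(int fd, void *buf, size_t len)
{
	if (len > MAX_IO_SIZE)
		len = MAX_IO_SIZE;
	for (;;) {
		ssize_t nr = read(fd, buf, len);
		if (nr < 0 && (errno == EINTR || errno == EAGAIN))
			continue;
		return nr;
	}
}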

Even though our streaming API allows writing new objects directly
to a packfile, user-specified filters, CRLF, and ident can make the
size of the output unknown before all the data has been processed,
so the best we could do for these would be to stream to a temporary
file and then copy it again with the length header prepended (an
undeltified packed object deflates only the payload, so this "copy"
can literally be a byte-for-byte copy, after writing the in-pack
header out).
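
If we went that temporary-file route, the outline would be roughly
the following (purely a sketch; copy_with_length_header and the
textual header it emits are stand-ins for the real loose/in-pack
header writing code):

#include <stdio.h>

/*
 * The filter's deflated output has already been streamed into
 * 'tmp'; only now do we learn its length, write the header, and
 * copy the payload byte-for-byte into the pack.
 */
static int copy_with_length_header(FILE *tmp, FILE *pack)
{
	char buf[8192];
	size_t n;
	long size;

	if (fseek(tmp, 0, SEEK_END))
		return -1;
	size = ftell(tmp);	/* the length we could not know up front */
	rewind(tmp);

	/* stand-in for the real header; a pack uses a binary
	 * type-and-size encoding, a loose object "<type> <size>\0" */
	fprintf(pack, "blob %ld", size);
	fputc('\0', pack);

	while ((n = fread(buf, 1, sizeof(buf), tmp)) > 0)
		fwrite(buf, 1, n, pack);	/* byte-for-byte copy */

	return (ferror(tmp) || ferror(pack)) ? -1 : 0;
}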

As reading from the object store and writing it out to the
filesystem (i.e. the entry.c::write_entry() codepath) does not need
to know the output size, convert.c::get_stream_filter() might want
to be told in which direction a filter is being asked for, and
return a streaming filter even for those filters that are
problematic in the opposite, writing-to-object-store direction.
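
A strawman of what such a direction-aware interface could look like
(all names below are made up for illustration, not what convert.c
has today; the two helpers are stand-ins for the real attribute and
conversion queries):

#include <stddef.h>

enum conv_direction {
	TO_WORKTREE,	/* write_entry(): output size irrelevant */
	TO_ODB		/* length must be recorded before the payload */
};

struct stream_filter;	/* opaque, as in the real convert.h */

/* stand-ins for the real attribute/conversion queries */
static int output_size_is_unpredictable(const char *path)
{
	(void)path;
	return 1;
}

static struct stream_filter *make_streaming_filter(const char *path)
{
	(void)path;
	return NULL;
}

struct stream_filter *get_stream_filter_dir(const char *path,
					    enum conv_direction dir)
{
	/*
	 * Filters with unpredictable output size (user filters,
	 * CRLF, ident) only disqualify streaming when writing into
	 * the object store; checkout can still stream.
	 */
	if (dir == TO_ODB && output_size_is_unpredictable(path))
		return NULL;	/* caller falls back to in-core conversion */
	return make_streaming_filter(path);
}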




