On Mon, Aug 25, 2014 at 06:55:51PM +0200, Steffen Prohaska wrote:

> It could be handled that way, but we would be back to the original
> problem that 32-bit git fails for large files. The convert code path
> currently assumes that all data is available in a single buffer at some
> point to apply crlf and ident filters.
>
> If the initial filter, which is assumed to reduce the file size, fails,
> we could seek to 0 and read the entire file. But git would then fail for
> large files with out-of-memory. We would not gain anything for the use
> case that I describe in the commit message's first paragraph.

Ah. So the real problem is that we cannot handle _other_ conversions for
large files, and we must try to intercept the data before it gets to them.

So this is really just helping "reduction" filters. Even if our streaming
filter succeeds, that does not help unless it actually reduces the large
file to a smaller one.

It would be nice in the long run to let the other filters stream, too, but
that is not a problem we need to solve immediately. Your patch is a strict
improvement.

Thanks for the explanation; your approach makes a lot more sense to me now.

-Peff
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
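
For illustration, here is a minimal sketch of the streaming idea discussed
above -- this is not git's actual convert.c code, and the names
stream_through_filter, write_all, out_fd and filter_cmd are made up for the
example. It feeds a file to an external filter command in fixed-size chunks,
with the filter's output going straight to a destination descriptor (say, an
open temp file), so the parent's memory use stays bounded no matter how
large the input is, instead of reading the whole file into one buffer:

/*
 * Illustrative sketch only: stream a file through an external filter
 * command in fixed-size chunks.  The filter writes its output directly
 * to out_fd, so the parent never holds the whole file in memory.
 * Real code would also handle SIGPIPE from a filter that exits early.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* write all of buf, retrying on short writes */
static int write_all(int fd, const char *buf, size_t count)
{
    while (count > 0) {
        ssize_t w = write(fd, buf, count);
        if (w < 0)
            return -1;
        buf += w;
        count -= (size_t)w;
    }
    return 0;
}

static int stream_through_filter(const char *path, const char *filter_cmd,
                                 int out_fd)
{
    int pipefd[2];
    pid_t pid;
    FILE *in;
    char buf[8192];
    size_t n;
    int status, err = 0;

    if (pipe(pipefd) < 0)
        return -1;

    pid = fork();
    if (pid < 0) {
        close(pipefd[0]);
        close(pipefd[1]);
        return -1;
    }
    if (pid == 0) {
        /* child: run the filter with stdin = pipe, stdout = out_fd */
        dup2(pipefd[0], 0);
        dup2(out_fd, 1);
        close(pipefd[0]);
        close(pipefd[1]);
        execl("/bin/sh", "sh", "-c", filter_cmd, (char *)NULL);
        _exit(127);
    }

    /* parent: copy the file into the filter one chunk at a time */
    close(pipefd[0]);
    in = fopen(path, "rb");
    if (!in)
        err = -1;
    else {
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
            if (write_all(pipefd[1], buf, n) < 0) {
                err = -1;
                break;
            }
        }
        fclose(in);
    }
    close(pipefd[1]);   /* EOF tells the filter we are done */

    if (waitpid(pid, &status, 0) < 0)
        return -1;
    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        err = -1;
    return err;
}

The trade-off from the quoted message is visible here: once the data has
been streamed through, there is no full copy of the original file left in
memory, so a failing filter cannot simply be retried against an in-memory
buffer without re-reading (and potentially failing to hold) the whole file.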