If I do this:

  git init repo && cd repo &&
  echo foo >small &&
  cat small small small small >large &&
  echo '* filter=foo2bar' >.gitattributes &&
  git config filter.foo2bar.clean 'sed s/foo/bar/' &&
  git config core.bigfilethreshold 10 &&
  git add . &&
  echo "===> small" &&
  git cat-file blob :small &&
  echo "===> large" &&
  git cat-file blob :large

the output I get is:

  ===> small
  bar
  ===> large
  foo
  foo
  foo
  foo

I.e., the clean filter is not applied to the file that goes through bulk
checkin. Nor can it be easily, because we need to know the size of the
file to write the blob header, and we don't know that until we have seen
all of the filter's output.

In practice, I don't know if this is a huge deal, as people aren't going
to be asking to de-CRLF files that actually cross the 512M
core.bigfilethreshold default (OTOH, I seem to recall there are some
filters floating around for normalizing gzip'd files, which could
plausibly be gigantic).

But it seems like the right choice when we see this conflict is not
"don't do filters for streaming checkin", but rather "don't do streaming
checkin when filters are in use" (because streaming is an optimization,
and filters are about correctness).

It would be even nicer if filters could play well with bulk checkin, but
I think that would involve streaming the filter output to a tempfile,
checking the size of that file, and then streaming it into an object.
That is better than putting the whole thing in memory if doing so would
mean swapping, but probably worse if the whole thing does fit in memory
(because you are doing a ton of extra I/O for the tempfile).

Thoughts? Was this intentional, or just overlooked?

-Peff
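
To make the header point concrete, here is a purely illustrative check
(not git's actual code path) that a blob's object name is computed over
a header containing the payload size, using the "small" file from the
reproduction, whose filtered content is the 4-byte "bar\n":

  # sha1 over "blob <size>\0<payload>" is the object name; we cannot
  # emit even the first byte of the header until <size> is known, i.e.
  # until the clean filter has produced all of its output.
  # (assumes a printf whose format string turns \0 into a NUL byte)
  { printf 'blob 4\0'; printf 'bar\n'; } | sha1sum
  git rev-parse :small    # should print the same hash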
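
And the tempfile idea would look roughly like the untested sketch below;
it reuses the foo2bar filter and the "large" file from the reproduction,
and does the object formatting by hand purely to show the data flow (the
real thing would deflate and write the object inside git, not in shell):

  # 1. stream the filter output to a tempfile instead of holding it in core
  tmp=$(mktemp) &&
  sed s/foo/bar/ <large >"$tmp" &&

  # 2. only now is the filtered size known, so the header can be written...
  size=$(wc -c <"$tmp") &&

  # 3. ...and the payload streamed in after it (just hashed here for
  #    illustration; git would also deflate it into a loose object or pack)
  { printf 'blob %d\0' "$size"; cat "$tmp"; } | sha1sum &&
  rm -f "$tmp"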