Joey Hess <joey@xxxxxxxxxxx> writes:

> There has been discussion before about using clean and smudge filters
> with %f to handle big files in git, with the file content stored outside
> git somewhere. A simplistic clean filter for large files could look
> like this:
>
> #!/bin/sh
> file="$1"
> ln -f $file ~/.big/$file
> echo $file

Isn't this filter already broken if the clean request is for blob
contents that differ from what is on the filesystem?

The name %f is passed to give the filter a _hint_ about what the path
is (so that the filter can choose to work differently depending on the
extension, for example), but the data may or may not come from the
filesystem, depending on what is calling the filter, no?

Most notably, renormalize_buffer() would call convert_to_git() on a
buffer that is internal, and possibly quite different from what is in
the working tree.
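[Editor's note: for comparison, a clean filter that honors this contract
would take the blob contents from stdin and treat %f purely as a hint.
A minimal sketch follows; the ~/.big store and the content-addressed
file naming are illustrative assumptions, not part of the original
proposal.]

  #!/bin/sh
  # Sketch of a clean filter that does not depend on the working tree
  # copy.  The blob contents arrive on stdin, so this also works when
  # git cleans an in-memory buffer (e.g. via renormalize_buffer());
  # "$1" (%f) is only a path hint and is not read here.
  store="$HOME/.big"
  mkdir -p "$store" || exit 1
  tmp=$(mktemp "$store/tmp.XXXXXX") || exit 1
  # Capture stdin into the store, then name the file by its content hash.
  cat >"$tmp" || exit 1
  sum=$(git hash-object "$tmp") || exit 1
  mv -f "$tmp" "$store/$sum" || exit 1
  # Emit the key as the cleaned content that git will record.
  echo "$sum"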