Hello,

I have an application which creates XML data files with attached checksum (uuencoded SHA-1) files. For example, the application might create

    a/b/c          # XML file
    a/b/c.chsum    # chsum of this file
    a/b/d/e        # another XML file
    a/b/d/e.chsum  # chsum of this file

It creates hundreds of those files, some with chsum files attached and some without. Some of those files are XML, others are binary or plain text. In addition, the application "shuffles" the content of those XML files, which results in a lot of clutter when the files are tracked by git. Unfortunately, the application refuses to load a file when the checksum file does not match its contents.

Those files seemed to be good candidates for git's clean/smudge filters. A clean filter would bring the files into a canonical format (that is, sorted and whitespace-normalized) when a file is about to be committed. A smudge filter would be used to re-calculate the checksum files on checkout/update.

Unfortunately, the description of clean/smudge filters states:

    Note that "%f" is the name of the path that is being worked on. Depending
    on the version that is being filtered, the corresponding file on disk may
    not exist, or may have different contents. So, smudge and clean commands
    should not try to access the file on disk, but only act as filters on the
    content provided to them on standard input.

Uh! That means the content the filter emits on stdout does not necessarily match the content that is supposed to be in the file when the git command finishes? Thus, simply calculating the checksum and storing it in the associated checksum file might result in a checksum failure when the application tries to load those files?

Am I missing something? Or am I completely off the road? Any help?

-- 
Josef Wolf
jw@xxxxxxxxxxxxx
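
PS: To make the clean-filter half concrete, here is a minimal sketch of what I have in mind. The canonicalization (sort children by tag, drop insignificant whitespace) and the chsum format (plain uuencoded SHA-1 digest) are my own guesses, not the application's actual formats. As a clean filter it reads stdin and writes stdout only, as the documentation demands:

```python
import binascii
import hashlib
import sys
import xml.etree.ElementTree as ET

def canonicalize(xml_bytes):
    """Return a whitespace-normalized serialization with child
    elements sorted by tag (one possible 'canonical' form)."""
    root = ET.fromstring(xml_bytes)

    def normalize(elem):
        # Drop whitespace-only text and tail...
        if elem.text is not None and not elem.text.strip():
            elem.text = None
        if elem.tail is not None and not elem.tail.strip():
            elem.tail = None
        # ...and give sibling elements a stable order.
        children = sorted(elem, key=lambda e: e.tag)
        for child in list(elem):
            elem.remove(child)
        for child in children:
            elem.append(child)
            normalize(child)

    normalize(root)
    return ET.tostring(root)

def chsum(data):
    """Uuencoded SHA-1 of the data -- a guess at the .chsum format."""
    return binascii.b2a_uu(hashlib.sha1(data).digest())

if __name__ == "__main__":
    # Acting as a clean filter: canonical content to stdout, nothing else.
    sys.stdout.buffer.write(canonicalize(sys.stdin.buffer.read()))
```

The open problem from above remains: a smudge counterpart would have to write the *.chsum file as a side effect, which is exactly what the documentation says filters must not do.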