Junio C Hamano <gitster@xxxxxxxxx> writes:

> If we are not restricting the characters that can be used in the
> custom ID placeholder, we probably should have tests that use
> allowed but unusual characters.

There is a test in t/t0021-conversion.sh that ensures that we recognise
expanded and unexpanded keyword strings using various lines that have
placeholder lookalikes on them, which starts like this:

    # If an expanded ident ever gets into the repository, we want to make sure that
    # it is collapsed before being expanded again on checkout
    test_expect_success expanded_in_repo '
    	{
    		echo "File with expanded keywords"
    		echo "\$Id\$"
    		echo "\$Id:\$"
    		echo "\$Id: 0000000000000000000000000000000000000000 \$"
    		echo "\$Id: NoSpaceAtEnd\$"
    		echo "\$Id:NoSpaceAtFront \$"
    		echo "\$Id:NoSpaceAtEitherEnd\$"
    		echo "\$Id: NoTerminatingSymbol"
    		echo "\$Id: Foreign Commit With Spaces \$"
    	} >expanded-keywords.0 &&
	...

It would be good to have a copy of this test with the placeholder
customized to something unusual like " Id:", "Id: $ $", ":", "$ $",
etc., to make sure we can parse out the keyword correctly without
imposing restrictions on the letters that can be used in the
placeholder.

    Side note. I of course do not literally mean to "copy" and make
    the number of tests explode. I think it can be done with a loop
    to test various custom strings, which may begin like

        ...
        for kwd in "Id" " Id:" "Id $" "Id: $ $" ":" "$ $"
        do
        	cat >expanded-keywords.0 <<-EOF &&
        	\$$kwd\$
        	\$$kwd:\$
        	\$$kwd: 0000000000000000000000000000000000000000 \$
        	\$$kwd: NoSpaceAtEnd\$
        	\$$kwd:NoSpaceAtFront \$
        	\$$kwd:NoSpaceAtEitherEnd\$
        	\$$kwd: NoTerminatingSymbol
        	\$$kwd: Foreign Commit With Spaces \$
        	EOF
        ...

If we can do without restrictions, that would be even better, but I do
not think it is the end of the world if we found some corner case where
a certain customized placeholder makes the syntax ambiguous and we
cannot recognise expanded keywords in the working tree files, as it is
reasonable to limit the letters we allow in placeholders.
One thing we should not do is to tell users that they can use anything
as their customized Id string, and then later find such a corner case
and tell them that we now forbid certain letters. To avoid such a
mistake, it is better to start reasonably tight: allow what we know
will never cause problems (like alphanumerics) and forbid what we know
will not be missed in real-world use cases (like whitespace and
punctuation). If we later discover that something we initially forbade
has real uses, we can loosen the restriction. "Start loose and then
tighten" will lead to a regression when viewed from the end-user's
workflow; "start tight and then loosen" will not have that problem.
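As a minimal sketch of the loop idea above (this is not actual t0021 code; the file name, keyword list, and sanity checks are just illustrations taken from or added to this message), the fixture generation could look like this:

```shell
# Sketch only: generate the expanded-keywords fixture for each candidate
# custom placeholder and sanity-check that all eight variant lines
# survive shell expansion intact.
for kwd in "Id" " Id:" "Id $" "Id: $ $" ":" "$ $"
do
	cat >expanded-keywords.0 <<EOF
\$$kwd\$
\$$kwd:\$
\$$kwd: 0000000000000000000000000000000000000000 \$
\$$kwd: NoSpaceAtEnd\$
\$$kwd:NoSpaceAtFront \$
\$$kwd:NoSpaceAtEitherEnd\$
\$$kwd: NoTerminatingSymbol
\$$kwd: Foreign Commit With Spaces \$
EOF
	# each fixture must have exactly 8 lines ...
	test $(wc -l <expanded-keywords.0) -eq 8 ||
		echo "bad fixture for '$kwd'"
	# ... with the candidate keyword embedded verbatim
	grep -F "\$$kwd: NoSpaceAtEnd\$" expanded-keywords.0 >/dev/null ||
		echo "keyword '$kwd' not found"
done
```

A real test would of course go on to configure the ident filter for each keyword and verify the collapse-then-expand round trip, which is the part the existing expanded_in_repo test exercises.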