On 8/20/19 1:46 PM, Pratyush Yadav wrote:
On 20/08/19 08:21AM, Leam Hall wrote:
Hey all, a newbie could use some help.
We have some code that generates data files, and as part of our build
process those files are rebuilt to ensure things work. This causes problems
with branches and merging: the data files change slightly on each rebuild,
and resolving half a dozen merge conflicts in files that are only in an
interim state is frustrating. The catch is that when the code reaches the
production state, those files must be in place and current.
We use a release branch, and then fork off that for each issue. Testing,
including file creation, is part of the pre-merge process. This is what
causes the merge conflicts.
Right now my thought is to put the "final" versions of the files in some
other directory, and add the interim storage directory to .gitignore.
Is there a better way to do this?
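A minimal sketch of that idea, with made-up directory names ("interim/" for the regenerated files, "final/" for the release copies):

```shell
# Hypothetical layout: interim/ holds files rebuilt during testing and is
# ignored; final/ would hold the copies that must ship, tracked as usual.
git init -q demo2
mkdir -p demo2/interim demo2/final
echo 'interim/' > demo2/.gitignore
echo 'rebuilt data' > demo2/interim/data.txt
# check-ignore exits 0 and echoes the path when the file is ignored
git -C demo2 check-ignore interim/data.txt
```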
My philosophy with Git is to only track the files that I need to generate
the final product. I never track the generated files, because I can
always regenerate them from the tracked "source" files.
So for example, I was working on a simple parser in Flex and Bison. Flex
and Bison each take a source file in their own syntax and generate a C
file, which is then compiled and linked into the final binary. So instead
of tracking the generated C files, I only tracked the Flex and Bison
sources. My build system can always regenerate the rest.
So in your case, what's wrong with tracking only the source files needed
to generate the others? When you want a release binary, just clone the
repo, run your build system, and get the generated files. What benefit do
you get from tracking the generated files?
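That clone-then-build flow, sketched with a stand-in build step (gen.sh below is an invented placeholder for the real build system):

```shell
# Upstream repo tracks only the source and the build script; the
# generated file is rebuilt from a fresh clone at release time.
git init -q upstream
printf '%s\n' '#!/bin/sh' 'tr a-z A-Z < source.txt > generated.txt' > upstream/gen.sh
echo 'payload' > upstream/source.txt
git -C upstream add source.txt gen.sh
git -C upstream -c user.email=you@example.com -c user.name=you \
    commit -q -m "track sources, not generated files"
git clone -q upstream release-copy
( cd release-copy && sh gen.sh )      # regenerate instead of tracking
cat release-copy/generated.txt        # -> PAYLOAD
```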
For internal use I agree with you. However, there's an issue.
The generated files are used by another program's build system, and I
can't guarantee that build system is laid out like ours. It seems easier
to provide them the generated files and decouple their build layout from
ours.
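One possible compromise for that situation, not something suggested verbatim in the thread: keep the generated files ignored on topic branches, but force-add them on the release branch so the downstream consumers get ready-made files. Directory and branch names here are invented:

```shell
# gen/ is ignored during day-to-day work, but its contents are
# force-added and committed on the release branch for consumers.
git init -q demo3
echo 'gen/' > demo3/.gitignore
mkdir demo3/gen
echo 'built artifact' > demo3/gen/out.dat
git -C demo3 add .gitignore
git -C demo3 -c user.email=you@example.com -c user.name=you \
    commit -q -m "ignore gen/ on topic branches"
git -C demo3 switch -q -c release
git -C demo3 add -f gen/out.dat       # -f overrides .gitignore for release
git -C demo3 -c user.email=you@example.com -c user.name=you \
    commit -q -m "ship generated files"
git -C demo3 ls-files gen/            # -> gen/out.dat
```

The trade-off is that merges into the release branch still touch the generated files, so this only helps if they are rebuilt once, at release time, rather than on every topic branch.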