Re: Only track built files for final output?

On 20/08/19 02:01PM, Leam Hall wrote:
> On 8/20/19 1:46 PM, Pratyush Yadav wrote:
> > On 20/08/19 08:21AM, Leam Hall wrote:
> > > Hey all, a newbie could use some help.
> > > 
> > > We have some code that generates data files, and as a part of our build
> > > process those files are rebuilt to ensure things work. This causes an issue
> > > with branches and merging, as the data files change slightly and dealing
> > > with half a dozen merge conflicts, for files that are in an interim state,
> > > is frustrating. The catch is that when the code goes to the production
> > > state, those files must be in place and current.
> > > 
> > > We use a release branch, and then fork off that for each issue. Testing, and
> > > file creation, is a part of the pre-merge process. This is what causes the
> > > merge conflicts.
> > > 
> > > Right now my thought is to put the "final" versions of the files in some
> > > other directory, and put the interim file storage directory in .gitignore.
> > > Is there a better way to do this?
> > > 
> > 
> > My philosophy with Git is to only track files that I need to generate
> > the final product. I never track the generated files, because I can
> > always get to them via the tracked "source" files.
> > 
> > So for example, I was working on a simple parser in Flex and Bison. Flex
> > and Bison take source files in their syntax, and generate a C file each
> > that is then compiled and linked to get to the final binary. So instead
> > of tracking the generated C files, I only tracked the source Flex and
> > Bison files. My build system can always get me the generated files.
> > 
> > So in your case, what's wrong with just tracking the source files needed
> > to generate the other files, and then when you want a release binary,
> > just clone the repo, run your build system, and get the generated files?
> > What benefit do you get by tracking the generated files?
> 
> For internal use I agree with you. However, there's an issue.
> 
> The generated files are used by another program's build system, and I can't
> guarantee that build system is laid out like ours. It seems
> easier to provide them the generated files and decouple their build system
> layout from ours.

Maybe I don't completely understand your use case, but you can still 
pass off the generated files to the external build system without having 
to track them. Unless the external build system relies exclusively on 
git clones/fetches, how about packaging the files generated by your 
build system into a tarball (or whatever else works for you) and 
shipping that to the external build system?
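Roughly, something like this (the `generated/` directory and file names 
are made-up placeholders; your real build step would go where the dummy 
output is written):

```shell
# Throwaway demo directory; in practice you'd run this after a clean
# checkout and your real build step.
work=$(mktemp -d) && cd "$work"

# Stand-in for the real build writing its outputs into generated/.
mkdir generated
echo "some data" > generated/table.dat

# Ship only the generated outputs. The tarball's layout is whatever
# the external build system expects, so it stays decoupled from your
# repo's layout -- and nothing here needs to be tracked by git.
tar czf generated.tar.gz generated/
tar tzf generated.tar.gz    # sanity-check the contents
```

That way the generated files never enter your history at all, and the 
external team consumes a versioned artifact instead of your repo.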

Assuming you just _have_ to track those files, will always resolving the 
merge conflicts as 'theirs' work?

My guess about how your process works is: you branch off, make a new 
feature or fix, and then merge those changes back to master. In that 
case, the changes the feature branch made to your generated files 
should always be the ones that get committed, correct? master's version 
of the generated files should be stale. So your merge conflicts always 
need to be resolved as 'theirs', at least for the generated files. 
git-merge's -X theirs option applies to the whole merge, but I believe 
gitattributes lets you set a per-path merge driver, so please check 
that. Otherwise, maybe you can write a script that resolves conflicts 
as 'theirs' for the generated files, and lets you sort out the rest 
manually.
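If memory serves, a per-path merge driver in gitattributes would look 
something like the sketch below (branch names, the `generated/` path, 
and the `keepTheirs` driver name are all made up for illustration):

```shell
# Demo in a throwaway repo.
repo=$(mktemp -d) && cd "$repo"
git init -q -b release .
git config user.email you@example.com && git config user.name you

# A merge driver that always takes the incoming ("theirs") version.
# %A is the current side's temp file, %B the incoming one; the merge
# result must end up in %A, so we simply copy over it.
git config merge.keepTheirs.driver "cp %B %A"

# Route only the generated files through that driver; everything
# else still gets a normal three-way merge.
mkdir generated
echo "generated/* merge=keepTheirs" > .gitattributes
echo base > generated/f.dat
git add -A && git commit -qm base

# Both sides touch the generated file...
git checkout -qb feature
echo feature > generated/f.dat && git commit -qam feature
git checkout -q release
echo release > generated/f.dat && git commit -qam release

# ...but the merge resolves it as the feature branch's version.
git merge -q -m merge feature
cat generated/f.dat    # feature
```

Note the driver config lives in each clone's .git/config, not in the 
repo, so every developer (and any CI box) would need that one git 
config line set up once.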

I'm just thinking out loud. I don't know how well this will scale. Maybe 
the more experienced folks here will have better ideas.

-- 
Regards,
Pratyush Yadav


