Re: Hey - A Conceptual Simplification....

On Wed, Nov 18, 2009 at 01:51:56PM -0500, George Dennie wrote:
> 
> One of the concerns I have with the manual pick-n-commit is that you can
> forget a file or two.

It is more difficult to make this mistake with Git than with many other
VCSes, because when you try to commit something, Git shows both the list
of files that are changed but not staged and the list of untracked
files. So it has never been a real issue for me in practice...
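
For instance, a bare `git commit` with nothing staged aborts and prints
both lists (output abridged, file names hypothetical):

    $ git commit
    # On branch master
    # Changed but not updated:
    #       modified:   src/main.c
    # Untracked files:
    #       src/helper.c
    no changes added to commit (use "git add" and/or "git commit -a")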

> Consequently, unless you do a clean checkout and test
> of the commit, you don't know that your publishable version even compiles.

If you want to be sure that a clean checkout compiles, the only way to
guarantee that is to actually do a clean checkout and build it. Even if
you commit all files except those listed in .gitignore, that is not
enough to be sure that a clean checkout will compile... But in most
cases you do not need to go that far to be *reasonably* sure that a
clean checkout will compile later, and if you have any doubts, you can
do a clean checkout and test _after_ committing your changes. There is
no reason to be afraid of committing something that may not work,
because you can amend it later (until you publish your changes).
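
A minimal sketch of that workflow, assuming a make-based project (the
scratch path and commit message are arbitrary):

    $ git commit -a -m 'WIP: new feature'  # snapshot first, worry later
    $ git clone . /tmp/clean-test          # clean checkout from the local repo
    $ make -C /tmp/clean-test              # does a pristine tree build?
    $ git commit --amend                   # fold in anything you forgot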

> It seems safer to commit the entirety of your work in its working state and
> then do a clean checkout from a dedicated publishable branch and manually
> merge the changes in that, test, and commit.

Maybe I did not understand your words, but I am not sure what is gained
this way... Clearly there is no reason to publish work that you have not
tested yet, and no one cares about the crap you keep in your working
tree either... So a better approach is to commit your changes as a
series of patches that can be reviewed easily, then do all the testing,
and only then publish them for integration into the main development
branch.
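
Something like this, assuming a topic branch and a `make test` target
(both made up for the example):

    $ git checkout -b topic master
    $ git add -p && git commit -m 'refactor: extract helper'
    $ git add -p && git commit -m 'feature: use the helper'
    $ git rebase -i master   # polish the series until each patch reads well
    $ make test              # test the series as a whole
    $ git push origin topic  # publish only now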

> 
> It seems the intuitive model is to treat version control as applying to the
> whole document, not parts of it. In this respect the document is defined by
> the IDE, namely the entire solution, warts and all.

This is a very bogus idea. If you want to preserve all the warts etc.,
you can just back up the whole disk, and then you have a state that can
be compiled at any time later (provided that your hardware does not
change too much). In my experience, most cases where I could not compile
an old version were caused not by forgetting to commit something, but by
changes in the environment (a new compiler, new libraries, etc.).

But when your commits are fine-grained, you can always cherry-pick the
corresponding fix-up and compile that old version when necessary.
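
Something along these lines, where the tag and the commit id are made
up:

    $ git checkout -b build-v1.2 v1.2  # the old version you need
    $ git cherry-pick abc1234          # the fine-grained fix-up for the build
    $ make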

In my experience, the value of VCS history lies in the ability to look
at it (sometimes many years later) and understand who wrote a given line
and why. Also, nearly all the cases where I had to compile some old
version came up while bisecting some tricky bug. In both cases, having
fine-grained commits was crucial to success.
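
For example (paths and revisions are made up):

    $ git blame -L 120,140 src/parser.c  # who wrote these lines, and when
    $ git bisect start HEAD v1.0         # bad revision first, then a good one
    $ make test && git bisect good || git bisect bad   # repeat until found
    $ git bisect reset                   # back to where you started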

> When you start
> selectively saving parts of the document then you are doing two things,
> versioning and publishing; and at the same time.

No, you don't. Committing some changes and publishing them are two
separate operations in Git, and that is pretty much fundamental.
Normally, you commit your changes as a few separate patches, review them
to make sure the changes match their commit messages, do all the
testing, and only then publish them.
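
In other words, nothing leaves your machine until you explicitly push
(the branch name here is hypothetical):

    $ git commit -m 'fix: handle empty input'  # local history only
    $ git push origin topic                    # publishing is a separate step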


Dmitry
--
