On February 20, 2019 2:35:41 AM GMT+01:00, Duy Nguyen <pclouds@xxxxxxxxx> wrote:
>On Wed, Feb 20, 2019 at 1:08 AM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>>
>> Duy Nguyen <pclouds@xxxxxxxxx> writes:
>>
>> > On Sun, Feb 17, 2019 at 2:36 AM Ævar Arnfjörð Bjarmason
>> > <avarab@xxxxxxxxx> wrote:
>> >>
>> >> On Sat, Feb 16 2019, Nguyễn Thái Ngọc Duy wrote:
>> >>
>> >> [Re-CC some people involved the last time around]
>> >>
>> >> > A new attribute "precious" is added to indicate that certain files
>> >> > have valuable content and should not be easily discarded even if
>> >> > they are ignored or untracked.
>> >> >
>> >> > So far there is one part of Git that is made aware of precious
>> >> > files: "git clean" will leave precious files alone.
>> >>
>> >> Thanks for bringing this up again. There were also some patches
>> >> recently to save away clobbered files; do you/anyone else have any
>> >> end goal in mind here that combines this & that, or some other thing
>> >> I may not have kept up with?
>> >
>> > I assume you mean the clobbering of untracked files by merge/checkout.
>> > Those files will be backed up [1] if backup-log is implemented. Even
>> > files deleted by "git clean" could be saved, but that might go a
>> > little too far.
>>
>> I agree with Ævar that it is a very good idea to ask what the
>> endgame should look like. I would have expected that, with the
>> introduction of a new "ignored but unexpendable" class of file
>> (i.e. "precious" here), operations such as merge and checkout would
>> be updated to keep such files in situations where we would remove
>> "ignored and expendable" files (i.e. "ignored"). And it is perfectly
>> OK if the very first introduction of "precious" support begins with
>> only a single operation, such as "clean", as long as the end goal is
>> clear.
>
>I think the sticking point is how to deal with the surprise factor, and
>"precious" will not help at all in this aspect.
>In my mind there are three classes:
>
> - total expectation: I know I want git to not touch some files, and I
>   tell git so (e.g. with "precious")
>
> - surprises sometimes, but in known classes. This is the main use case
>   of the backup log, where I may accidentally do "git commit
>   -amsomething" after carefully preparing the index. Saving files
>   overwritten by merge/checkout could be done here as an alternative
>   to the "garbage" attribute.
>
>> I personally do not believe in "backup log"; if we can screw up and
>> can fail to stop an operation that must avoid losing info, then we
>> can screw up the same way and fail to design and implement "backup"
>> to save info before an operation loses it. If we do a good job of
>> supporting "precious" in various operations, we can rely less on the
>> "backup log" and still be safe ;-)
>
>and this is the third class: something completely unexpected. Yes,
>backup-log can't help here, but I don't think "precious" can either.
>And I have no good proposal for this case.

Sorry for going off on a tangent here, but I have had this on my mind
for a long time.

For cases where a merge can lead to the loss of a non-ignored untracked
file (t7607-merge-overwrite.sh), I have the following proposal:

1. Merge the ORIG_HEAD and MERGE_HEAD commits without touching the
   index or the work tree. This is where we do rename detection,
   recursive merge, and content (line-by-line) merge. The result is
   CHECKOUT_HEAD, a tree with possible merge conflicts.

   For the switch-branch operation, CHECKOUT_HEAD is simply the tree to
   switch to. The remaining steps are the same for the merge and
   switch-branch operations.

2. Merge CHECKOUT_HEAD and the index, with ORIG_HEAD as the merge base.
   The result is CHECKOUT_INDEX. Do this in order to keep staged
   changes which are not affected by the merge. Do not do rename
   detection or content merge. In case of conflict, roll back and
   error out.

3. Merge CHECKOUT_INDEX with the work tree, with the original index as
   the merge base.
   Do this to simulate the work tree update. Do not do rename detection
   or content merge. A conflict here means that the checkout operation
   would touch untracked files or files with unstaged changes. In case
   of such a conflict, roll back and error out.

I believe this algorithm would behave much like the current
implementation, but it separates the rename/history/content aspects of
the merge algorithm from the checkout operation. It greatly simplifies
the implementation of the checkout operation, and there are no special
cases where we lose files.

Implementing step 1 is the tricky part, but it may still be worthwhile
because the merge algorithm then does not have to worry about staged or
unstaged changes. The merge algorithm could work on the hierarchical
tree structure instead of the flattened index. This makes it trivial to
detect directory/file conflicts (no need to do a lookahead when
iterating over index entries), and it is also a better fit for
detecting directory renames. Maybe this will allow us to focus more on
rename detection, such as directory renames or moved functions [*1*].

[*1*] Also: moved files, where the original file is replaced with a
wrapper for the moved file, always fool rename detection, because we do
not detect renames for files which were not removed.
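To make the shape of steps 2 and 3 concrete, here is a rough sketch of
the trivial three-way tree merge they rely on (no rename detection, no
content-level merge; any real conflict means "roll back and error
out"). This is purely illustrative, not the actual git code: the
`merge_trees` helper is hypothetical, trees are flattened to
path-to-content dicts for brevity, and step 1 (the full content merge)
is deliberately out of scope:

```python
class MergeConflict(Exception):
    """Raised where the proposal says "roll back and error out"."""

def merge_trees(base, ours, theirs):
    """Trivial three-way tree merge, as in steps 2 and 3 of the
    proposal: no rename detection, no content-level merge.
    Trees are flattened to {path: content} dicts for brevity."""
    result = {}
    for path in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(path), ours.get(path), theirs.get(path)
        if o == t:
            merged = o          # both sides agree (including both deleted)
        elif o == b:
            merged = t          # only "theirs" touched this path
        elif t == b:
            merged = o          # only "ours" touched this path
        else:
            raise MergeConflict(path)  # both changed it: refuse to merge content
        if merged is not None:  # None means "deleted"
            result[path] = merged
    return result

# Step 2: merge CHECKOUT_HEAD and the index, with ORIG_HEAD as the base.
orig_head     = {"a.txt": "v1", "b.txt": "v1"}
checkout_head = {"a.txt": "v2", "b.txt": "v1"}            # result of step 1
index         = {"a.txt": "v1", "b.txt": "v1", "c.txt": "staged"}
checkout_index = merge_trees(orig_head, checkout_head, index)
# a.txt is taken from CHECKOUT_HEAD; the staged c.txt is kept.

# Step 3: merge CHECKOUT_INDEX and the work tree, with the index as the base.
# An untracked d.txt that the checkout would also create is a conflict.
worktree = dict(index, **{"d.txt": "untracked"})
try:
    merge_trees(index, dict(checkout_index, **{"d.txt": "from merge"}), worktree)
    conflict_path = None
except MergeConflict as e:
    conflict_path = e.args[0]   # "d.txt": would clobber an untracked file
```

The point of the sketch is that the same dumb tree merge serves both
steps: step 2 preserves unrelated staged changes, and step 3 turns
"checkout would clobber an untracked or modified file" into an ordinary
merge conflict that aborts the whole operation.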