Re: [PATCH v2] sha1-file: remove OBJECT_INFO_SKIP_CACHED

On Thu, Jan 02, 2020 at 01:41:27PM -0800, Junio C Hamano wrote:

> Jonathan Tan <jonathantanmy@xxxxxxxxxx> writes:
> 
> > As a historical note, the function now known as repo_read_object_file()
> > was taught the empty tree in 346245a1bb ("hard-code the empty tree
> > object", 2008-02-13), and the function now known as oid_object_info()
> > was taught the empty tree in c4d9986f5f ("sha1_object_info: examine
> > cached_object store too", 2011-02-07). repo_has_object_file() was never
> > updated, perhaps due to oversight. The flag OBJECT_INFO_SKIP_CACHED,
> > introduced later in dfdd4afcf9 ("sha1_file: teach
> > sha1_object_info_extended more flags", 2017-06-26) and used in
> > e83e71c5e1 ("sha1_file: refactor has_sha1_file_with_flags", 2017-06-26),
> > was introduced to preserve this difference in empty-tree handling, but
> > now it can be removed.
> 
> I am not 100% sure that the implications of this change are safe
> enough to allow us to say "now it can be".
> 
> The has_object_file() helper wanted to say "no" when given a
> non-existing object registered via pretend_object_file(),
> presumably because we wanted to allow a use pattern like:
> 
>  - prepare an in-core representation of an object we tentatively
>    expect, but are not absolutely sure, to be necessary.
> 
>  - perform operations, using the object data obtained via
>    read_object() API, which is capable of yielding data even for
>    such "pretend" objects (perhaps we are creating tentative merge
>    parents during a recursive merge).
> 
>  - write out the final set of objects by enumerating those that do
>    not really exist yet (via the has_object_file() API).
> 
> Teaching has_object_file() about the empty tree is a good thing
> (especially because we do not necessarily write an empty tree object
> to our repositories), but as collateral damage of doing so, we make
> such a use pattern impossible.
> 
> It is not a large loss---the third bullet in the above list can just
> be made to unconditionally call write_object_file() without
> filtering with has_object_file(), and write_object_file() will apply
> the right optimization anyway, so it probably is OK.

I agree that whoever called pretend_object_file() can be careful and
write out the final set of objects itself via write_object_file(). But
I'd worry a bit about a caller who doesn't necessarily realize that they
need to do that. E.g., imagine we call pretend_object_file() for some
blob oid, expecting it to be read-only. And then in the same process,
some other bit of the code writes out a tree that mentions that blob.
Oops, that tree is now corrupt after we exit the process. And IMHO
neither the pretend-caller nor the tree-writer is to blame; the problem
is that they shared global state they were not expecting.

This is pretty far-fetched given that the only user of
pretend_object_file() is in git-blame right now. But it does give me
pause. Overall, though, I'm more inclined to say that we should be
dropping SKIP_CACHED here and considering pretend_object_file() to be a
bit dangerous (i.e., to keep it in mind if somebody proposes more
calls).

Another point of reference (in favor of Jonathan's patch):

  https://lore.kernel.org/git/20190304174053.GA27497@xxxxxxxxxxxxxxxxxxxxx/

is a bug that would not have happened if this patch had been applied
(there's also some discussion of the greater issue, but nothing that wasn't
already brought up here, I think).
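As a side note, the empty-tree special-casing that the patch unifies is easy to observe from the command line. This is just an illustrative sketch (it assumes a stock git install and uses a throwaway repo under mktemp): the well-known empty-tree object ID resolves even in a brand-new repository where no object has ever been written to disk, precisely because it is hard-coded rather than read from the object store.

```shell
# The empty tree's SHA-1 is hard-coded in git, so lookups succeed
# even in a fresh repository with no objects on disk at all.
dir=$(mktemp -d)
git init -q "$dir"

# Resolves to "tree" despite the object store being empty:
git -C "$dir" cat-file -t 4b825dc642cb6eb9a060e54bf8d69288fbee4904

# Shows that no loose object files actually exist:
find "$dir/.git/objects" -type f
```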

-Peff


