Re: Proposed design of fast-export helper

Hi Jonathan,

Jonathan Nieder writes:
> Ramkumar Ramachandra wrote:
> > The other two kinds of `<dataref>` that exporters can produce are:
> > 1. A mark reference (`:<idnum>`) set by a prior `blob` command
> > 2. A full 40-byte SHA-1 of an existing Git blob object.
> 
> The above is very git-specific --- arbitrary foreign vcs-es are
> unlikely to all use 40-byte hashes as <dataref>.  So far I've been
> assuming that a <dataref> is sufficiently "nice" (not containing
> spaces, NULs, quotation marks, or newlines nor starting with a colon).
> 
> It would be better to come up with a more formal rule and document it.

Actually, we need to tighten this <dataref> thing before we build
anything else -- it's a nightmare to handle a stream that refers to
the same blob using a mark the first time, the SHA-1 the second time,
and an MD5 the third time.  How is our store supposed to know how to
index and retrieve blobs?

Next step: we should find out everything <dataref> can currently be
by looking at existing frontend implementations.  Then we should
tighten the spec so that it doesn't clobber any of those usages.
Also, we should find a way to let the backend know "how" to index/
retrieve a blob -- this is only straightforward in the case of marks.
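
To make the mark case concrete, here's a rough sketch of how the
backend could classify an incoming <dataref>; the names are invented,
and only the mark form actually tells the store how to index the blob:

	/* Sketch only; dataref_kind and classify_dataref are made-up
	 * names, not anything that exists in fast-import today. */
	enum dataref_kind {
		DATAREF_MARK,	/* ":<idnum>" set by a prior 'blob' command */
		DATAREF_OPAQUE	/* SHA-1, MD5, or whatever the frontend emits */
	};

	static enum dataref_kind classify_dataref(const char *ref)
	{
		/* Marks always start with ':'; everything else is an
		 * opaque key the store has to index verbatim. */
		return ref[0] == ':' ? DATAREF_MARK : DATAREF_OPAQUE;
	}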

> I assume the delimited format works as in fast-import's "data" command
> (and only supports blobs ending with LF)?

Yes.  This is actually quite ugly to support -- we should probably
drop it.

Signed-off-by: Ramkumar Ramachandra <artagnon@xxxxxxxxx>

diff --git a/Documentation/git-fast-import.txt b/Documentation/git-fast-import.txt
index 2c2ea12..1fb71f7 100644
--- a/Documentation/git-fast-import.txt
+++ b/Documentation/git-fast-import.txt
@@ -826,8 +826,8 @@ of the next line, even if `<raw>` did not end with an `LF`.
 Delimited format::
 	A delimiter string is used to mark the end of the data.
 	fast-import will compute the length by searching for the delimiter.
-	This format is primarily useful for testing and is not
-	recommended for real data.
+	This format should only be used for testing; other
+	backends are not required to support it.
 +
 ....
 	'data' SP '<<' <delim> LF
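
To illustrate why: the length isn't known up front, so the reader has
to buffer line by line until the delimiter shows up on a line of its
own.  A rough sketch in plain stdio (uses POSIX getline; nothing
git-specific):

	#include <sys/types.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Read one delimited-format blob into 'out'.  Returns 0 once the
	 * delimiter is seen, -1 on EOF (malformed stream). */
	static int read_delimited(FILE *in, const char *delim, FILE *out)
	{
		char *line = NULL;
		size_t alloc = 0;
		ssize_t len;

		while ((len = getline(&line, &alloc, in)) != -1) {
			if (len && line[len - 1] == '\n')
				line[--len] = '\0';
			if (!strcmp(line, delim))
				break;
			fputs(line, out);
			fputc('\n', out); /* hence "only blobs ending in LF" */
		}
		free(line);
		return len == -1 ? -1 : 0;
	}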


> > fetch_blob_mark and fetch_blob_sha1 can then be used to fetch blobs
> > using their mark or SHA1.  Fetching blobs using their mark should be
> > O(1), while locating the exact SHA1 will require a bisect of sorts:
> > slightly better than O(log (n)).
> 
> http://fanf.livejournal.com/101174.html

Right, but this discussion is now useless, since keys can be just
about anything.
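
For contrast, here's the shape of the two lookups we do know how to
do (the index layout is invented for illustration):

	#include <stdlib.h>
	#include <string.h>

	/* Invented layout: one entry per blob in a sorted index. */
	struct idx_entry {
		char *key;	/* arbitrary <dataref> */
		long offset;	/* where the blob's data lives */
	};

	static int cmp_idx_entry(const void *a, const void *b)
	{
		const struct idx_entry *x = a, *y = b;
		return strcmp(x->key, y->key);
	}

	/* Marks index straight into an array -- marks[idnum] is O(1).
	 * An arbitrary key has to be searched for -- O(log n): */
	static struct idx_entry *lookup_key(struct idx_entry *idx,
					    size_t n, const char *key)
	{
		struct idx_entry needle = { (char *)key, 0 };
		return bsearch(&needle, idx, n, sizeof(*idx),
			       cmp_idx_entry);
	}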

> > How the library works
> 
> I wonder if it would be sensible to make it run as a separate process.
> The upside: writing to and from pipes is easy in a variety of
> programming languages (including the shell), even easier than calling
> C code.  So in particular that would make testing it easier.  But
> performance considerations might outweigh that.

Performance and portability considerations.  Calling semantics will
probably be highly inelegant too, since full-blown bi-directional
communication is necessary.
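
Even the simplest call turns into a write-then-read dance over a pair
of pipes.  Hypothetical wire protocol, just to show the shape of it:

	#include <stdio.h>

	/* 'to_store' and 'from_store' are pipes to a store running as a
	 * separate process; every get() becomes a full round-trip. */
	static int store_get(FILE *to_store, FILE *from_store,
			     const char *key, char *buf, size_t bufsz)
	{
		fprintf(to_store, "get %s\n", key);
		fflush(to_store);
		return fgets(buf, bufsz, from_store) ? 0 : -1;
	}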

> I also wonder if it is possible or makes sense to make the API less
> git-specific.  If the buffers were in-memory, something like
> 
> 	set(key, value);
> 	value = get(key);
> 
> would do.  Since they are not, maybe something vaguely like
> 
> 	FILE *f = kvstore_fopen(key, O_WRONLY);
> 	fwrite(value, sz, 1, f);
> 	kvstore_fclose(f);
> 
> 	FILE *f = kvstore_fopen(key, O_RDONLY);
> 	strbuf_fread(&value, SIZE_MAX, f);
> 	kvstore_fclose(f);

I don't like this.  The caller should not have to know whether blobs
are persisted in memory or on disk.  When there are a few small
frequently-used blobs, the key-value store might decide to keep them
in memory, and we should allow for that kind of optimization.
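
Something closer to an opaque handle would keep that freedom (all
names invented; just the kind of API I'd prefer):

	#include <stdio.h>

	/* The value stays opaque to the caller, so the store is free to
	 * keep hot blobs in memory and spill the rest to disk. */
	struct kvstore;		/* the store itself */
	struct kvstore_value;	/* one blob, wherever it lives */

	struct kvstore_value *kvstore_get(struct kvstore *store,
					  const char *key);
	void kvstore_value_free(struct kvstore_value *v);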

> would be something to aim for.  For the getter case, fmemopen is
> portable (in case one wants to just put the value in memory) but
> fopencookie (in case one doesn't) is not, so the idea does not work as
> nicely as one might like.  And it's not quite the right abstraction
> --- for a fast-import backend, I suppose the operations needed are:
> 
>  * get length
>  * dump the value to a caller-specified FILE * or fd
>  * let the caller read the value one chunk or line at a time to
>    transform it (e.g., to escape special characters).
> 
> Is there prior art that this could mimic or reuse (so we can learn
> from others' mistakes and make sure the API feels familiar)?

Kyoto Cabinet, or just about any key-value store for that matter.
All the prior discussion related to SHA-1 is useless then, because
the key can be just about anything: the only option we have is to
implement the index as a data structure with a very high fanout,
like a B+ tree.  Obviously, this will be less efficient than a store
that keys everything on a fixed 20-byte SHA-1 -- how much speed are
we willing to trade away for the sake of this simplicity?
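
Your three operations fit the same opaque-handle shape, by the way
(continuing the invented names from above):

	#include <stdio.h>

	struct kvstore_value;	/* opaque, as above */

	/* Get the length. */
	size_t kvstore_value_len(const struct kvstore_value *v);

	/* Dump the whole value to a caller-specified FILE *. */
	int kvstore_value_dump(const struct kvstore_value *v, FILE *out);

	/* Read a chunk at a time, so the caller can transform the data
	 * (e.g. escape special characters) without slurping the blob.
	 * Returns the number of bytes written to buf, 0 at the end. */
	size_t kvstore_value_read(struct kvstore_value *v,
				  char *buf, size_t bufsz);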

-- Ram