On 20.07.2009, 15:48, Jakub Narebski <jnareb@xxxxxxxxx> wrote:
"Matthias Andree" <matthias.andree@xxxxxx> writes:
On a more general note, is someone looking into improving the http://
efficiency? Perhaps there are synergies between my plan of (a)
encryption and (b) more efficient "dumb" (http/rsync/...) protocol
use.
> There was an idea about improving http:// efficiency, but it was via
> creating git-over-HTTP, a.k.a. a "smart" HTTP server, i.e. you would
> have to have the DAG exposed, like for git:// and ssh://.
>
> The http:// transport, on the other hand, needs only a "dumb" web
> server plus the additional metadata generated by git-update-server-info.
> It is the client who does the "walking" of the DAG, so all data,
> including the server metadata, can be encrypted and decrypted on the
> fly by the client.

Fine by me, and it seems to amount to "minimal disclosure to the server".
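
To make that concrete for myself, here is a rough, untested Python
sketch of what the client-side walk over a dumb server boils down to:
read info/refs (written by git-update-server-info) and the loose
objects, and follow parent pointers locally. The repository URL is made
up, packfile fallback and error handling are omitted, and fetch() is
exactly where an on-the-fly decryption step would slot in:

  # Rough sketch, not production code: walk a repository over the "dumb"
  # http:// transport, using only the files git-update-server-info and
  # the plain object store provide.  BASE is a made-up URL.
  import urllib.request, zlib

  BASE = "http://example.org/repo.git"

  def fetch(path):
      with urllib.request.urlopen(BASE + "/" + path) as r:
          return r.read()          # an on-the-fly decrypt() would wrap this

  def refs():
      # info/refs, written by git-update-server-info: "<sha1>\t<refname>"
      out = {}
      for line in fetch("info/refs").decode().splitlines():
          sha, name = line.split("\t")
          out[name] = sha
      return out

  def loose_object(sha):
      # a loose object is zlib-deflated "type SP size NUL payload"
      raw = zlib.decompress(fetch("objects/%s/%s" % (sha[:2], sha[2:])))
      header, payload = raw.split(b"\0", 1)
      return header.split(b" ")[0].decode(), payload

  def walk_commits(sha, seen=None):
      # the client, not the server, follows the parent pointers of the DAG
      seen = set() if seen is None else seen
      if sha in seen:
          return seen
      seen.add(sha)
      objtype, body = loose_object(sha)
      if objtype == "commit":
          for line in body.decode(errors="replace").splitlines():
              if not line:
                  break            # headers end, commit message follows
              if line.startswith("parent "):
                  walk_commits(line.split()[1], seen)
      return seen
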
> I don't know, though, what information leakage you would get from the
> existence of loose objects and packfiles, and their sizes. Probably
> negligible...

Dunno. Given that it's just a collection of object sizes, you can't tell
from the SHA-1 whether the object in question is a tree, tag, blob, or
commit.
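
To illustrate why the name alone gives nothing away: the object id is
the SHA-1 of a "type size\0" header followed by the contents, so the
type goes into the hash but cannot be read back out of the 40-character
name. A quick sketch (the blob hash below is the well-known one for
"hello\n"):

  # How git forms an object id: SHA-1 over "type SP size NUL" plus the
  # contents.  The type is an input to the hash, so an observer who only
  # sees the 40-character name cannot recover it.
  import hashlib

  def object_id(objtype, payload):
      header = ("%s %d\0" % (objtype, len(payload))).encode()
      return hashlib.sha1(header + payload).hexdigest()

  print(object_id("blob", b"hello\n"))    # ce013625030ba8dba906f756967f9e9ca394464a
  print(object_id("commit", b"hello\n"))  # a completely unrelated-looking id
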
I'm really not after on-the-fly delta recompression on the server side
for the crypto stuff. I'm thinking more along the lines of
zsync/bsdiff/xdelta (http://zsync.moria.org.uk/ being the least known) -
but zsync can't work on encrypted data. Perhaps encrypting the diffs
could work, but then how would that differ from using the http:// and
update-server-info material and combining it with client-side
on-the-fly (de/en)cryption?
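
For completeness, the combination I have in mind is roughly this: every
file the dumb server hands out (loose objects, packs, the
update-server-info metadata) is stored encrypted, and the client
decrypts right after each HTTP GET. A toy sketch, using the third-party
`cryptography` package (Fernet) purely as a placeholder for whatever
cipher would actually be chosen; key distribution is hand-waved:

  # Toy sketch of "dumb http:// plus client-side crypto".  Assumes the
  # third-party `cryptography` package; Fernet stands in for the real
  # cipher, and the key would be shared with clients out of band.
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()
  box = Fernet(key)

  def publish(raw):
      # what the repository owner uploads to the plain web server
      return box.encrypt(raw)

  def fetch_and_decrypt(ciphertext):
      # what the client does right after GETting objects/xx/yyyy...
      return box.decrypt(ciphertext)

  loose = b"x\x9c..."                  # an already zlib-deflated loose object
  assert fetch_and_decrypt(publish(loose)) == loose

The server then only ever sees ciphertext and file sizes, which is
exactly the leakage you mention above.
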
--
Matthias Andree