Re: encrypted repositories?

On 17.07.2009 at 21:38, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:



On Fri, 17 Jul 2009, Matthias Andree wrote:

Assume you have a repository where you want to work on embargoed information, so that not even system administrators of the server you're pushing to can get
a hold of the cleartext data.

If the server can't ever read it, you're basically limited to just one
story:

 - use rsync-like "stupid" transports to upload and download things.

 - a "smart" git server (eg the native git:// style protocol is not going
   to be possible)

I don't know all of its features; apparently it does online recompression - that would no longer be available.

and you strictly speaking need no real git changes, because you might as
well just do it by uploading an encrypted tar-file of the .git directory.
And there is literally no upside in doing anything else - any native git
support is almost entirely pointless.

You could make it a _bit_ more useful perhaps by adding some helper
wrappers, probably by just implementing a new transport name (ie instead
of using "rsync://", you'd just use "crypt-tgz://" or something).

Now, that said, there are probably situations where maybe you'd allow the
server to decrypt things _temporarily_, but you don't want things to be
unencrypted on disk and don't want persistent keys on the server; that
would open up a lot more possibilities.

Of course, that still does require that you trust the server admin to
_some_ degree - anybody who has root would be able to get the keys by
running a debugger on the git upload/download sequence when you do an
upload or download.

Maybe that kind of security is still acceptable to you, though?

No, the server can't be allowed access to the keys or decrypted data.

I'm not sure about the commit graph and whether I should be concerned about it; exposing the DAG to the server might be acceptable.

It would be OK if neither the on-disk storage nor the over-the-wire format could use delta compression in that case. It would suffice to send a set of objects efficiently - and perhaps smaller revisions could still be delta-compressed by the clients before pushing.
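
To illustrate what I mean by treating the server as a store of whole, client-encrypted objects, here is a toy sketch (push_objects, the flat store_dir layout and the Fernet key handling are invented for illustration; packfiles are ignored, and loose objects are already zlib-compressed by git):

    # Toy "dumb object store" push: every loose object is encrypted on
    # its own and uploaded under its (cleartext) object id, so the server
    # needs neither keys nor delta support.
    import os
    from cryptography.fernet import Fernet

    def push_objects(git_dir, store_dir, key):
        f = Fernet(key)
        os.makedirs(store_dir, exist_ok=True)
        objects = os.path.join(git_dir, "objects")
        for fanout in os.listdir(objects):
            if len(fanout) != 2:        # skip objects/info and objects/pack
                continue
            for name in os.listdir(os.path.join(objects, fanout)):
                oid = fanout + name
                with open(os.path.join(objects, fanout, name), "rb") as fh:
                    blob = fh.read()    # already zlib-compressed by git
                with open(os.path.join(store_dir, oid), "wb") as out:
                    out.write(f.encrypt(blob))
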

I admit I haven't checked how the current git:// over-the-wire protocol[s] work[s]. I think client-side delta compression may require limiting the delta chain depth or the delta size (when either limit is exceeded, the client must send the standalone, self-contained object rather than a delta), so that the server can refuse pushes whose delta nesting gets too deep or whose deltas get too big.
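
Roughly the kind of client-side rule I have in mind, with made-up names and thresholds:

    # Toy policy for "limit the delta chain depth or the delta size":
    # the client sends a delta only while the chain stays shallow and the
    # delta stays small, otherwise it falls back to the full object.
    MAX_DEPTH = 10     # invented threshold
    MAX_RATIO = 0.5    # delta must be smaller than half the full object

    def choose_payload(full_obj, delta, base_depth):
        depth = base_depth + 1
        if depth > MAX_DEPTH or len(delta) > MAX_RATIO * len(full_obj):
            return "full", full_obj, 0          # chain resets at depth 0
        return "delta", delta, depth
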

I think this would reduce the git server to something like a storage device for objects, perhaps with the DAG alongside it if exposing it turns out to be acceptable.
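
If exposing the DAG is acceptable, the commit edges could travel as a separate cleartext manifest next to the encrypted objects - a sketch (dag_manifest is a made-up helper built on `git rev-list --parents --all`):

    # Toy cleartext DAG manifest: only commit ids and parent edges are
    # exposed; messages, trees and file contents stay encrypted.
    import json
    import subprocess

    def dag_manifest(git_dir):
        out = subprocess.run(
            ["git", "--git-dir", git_dir, "rev-list", "--parents", "--all"],
            capture_output=True, text=True, check=True).stdout
        edges = {}
        for line in out.splitlines():
            ids = line.split()          # "commit parent1 parent2 ..."
            edges[ids[0]] = ids[1:]
        return json.dumps(edges)
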


On a more general note, is someone looking into improving the efficiency of the http:// transport? Perhaps there are synergies between my plans for (a) encryption and (b) more efficient use of the "dumb" (http/rsync/...) protocols.

--
Matthias Andree
