Christian Couder <christian.couder@xxxxxxxxx> writes:

> > The contents will be stored verbatim without compression and without
> > any object header (i.e., the usual "<type> <length>\0") and the file
> > could be "ln"ed (or "cow"ed if the underlying filesystem allows it)
> > to materialize it in the working tree if needed.
> >
> > "fsck" needs to be told about how to verify them. Create the object
> > header in-core and hash that, followed by the contents of that file,
> > and make sure the result matches the <hex-object-name> part of the
> > filename, or something like that.
>
> What happens when they are transferred? Should the remote unpack them
> into the same kind of verbatim object?

I think that the design space is vast and needs to be discussed, perhaps
independently of the local repo case (in which, for a start, we could
just detect large blobs being added to the index and put them in our new
object store instead of loose/packed storage, and make sure that we
never repack them).

Some concerns during fetch:

 - Servers would probably want to serve the large blobs via CDN, so we
   probably need something similar to packfile-uris. Would servers also
   want to inline these blobs? (If not, we don't need to design this
   part.)

 - Would servers be willing to zlib-compress large blobs (into packfile
   format) if the client doesn't support verbatim objects?

And during push:

 - Clients probably want to be able to inline large blobs when pushing.
   Should it also be possible to specify the large blob via URI, and if
   yes, how does the server tell the client what URIs are acceptable?
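
As an aside, the fsck-style check quoted above could be sketched roughly
like this (assuming a SHA-1 repository; "verify_verbatim_object" is a
hypothetical name for illustration, not an existing Git API):

```python
import hashlib
import os


def verify_verbatim_object(path, expected_hex, obj_type="blob"):
    """Hash an in-core "<type> <length>\\0" header followed by the
    verbatim file contents, and check the result against the hex
    object name taken from the filename."""
    size = os.path.getsize(path)
    h = hashlib.sha1()
    h.update(b"%s %d\x00" % (obj_type.encode(), size))
    with open(path, "rb") as f:
        # Stream in chunks; by assumption the file may be very large.
        while chunk := f.read(65536):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

For example, an empty file should verify against
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391, the well-known id of the
empty blob. A SHA-256 repository would use hashlib.sha256() and
64-hex-digit names instead.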