Re: [PATCH] pack-objects: re-validate data we copy from elsewhere.

On Fri, 1 Sep 2006, Junio C Hamano wrote:
> 
> But "git repack -a -d", which you now consider almost being
> free, in the recent kernel repository counts 300k objects, and
> reuses 298k objects or so.  That means we expand and recompress
> that many objects, totalling 120MB.

Sure. Do we have data for how expensive that is (ie did you apply the 
patch and time it)?
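The re-validation under discussion boils down to: before copying an object's compressed bytes verbatim from an existing pack, inflate them and check the result against the object's name. A rough sketch of that idea (in Python rather than git's C, with a hypothetical helper name; git's real pack-objects code is more involved):

```python
import hashlib
import zlib

def revalidate_and_copy(compressed, expected_sha1):
    """Hypothetical helper, not git's actual code: inflate the data we
    are about to reuse from an existing pack and verify it against the
    object's name before copying the compressed bytes as-is."""
    data = zlib.decompress(compressed)
    # git names a blob by hashing the header "blob <len>\0" plus contents
    header = b"blob %d\x00" % len(data)
    actual = hashlib.sha1(header + data).hexdigest()
    if actual != expected_sha1:
        raise ValueError("corrupt object in source pack")
    return compressed  # validated; safe to reuse verbatim

payload = b"hello world\n"
sha = hashlib.sha1(b"blob %d\x00" % len(payload) + payload).hexdigest()
packed = zlib.compress(payload)
assert revalidate_and_copy(packed, sha) == packed
```

The cost being debated is exactly the decompress-and-hash step here, applied to every reused object (298k objects, ~120MB in the kernel repository case above).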

I'd rather be really safe by default, and then if somebody knows to trust 
their archive, maybe add a "--fast" flag (or even a "core.reliablepack" 
config option) to disable it for people who have backups and think their 
machines are infallible - or have slow CPUs.
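If such an escape hatch were added, usage might look like the following (hypothetical - neither the `--fast` flag nor `core.reliablepack` exists; this only illustrates the suggestion above):

```shell
# Hypothetical options, per the suggestion above -- not real git interfaces:
git config core.reliablepack true   # trust existing pack data from now on
git repack -a -d --fast             # or skip re-validation for one repack
```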

For me, performance has always been one of the primary goals, but being 
able to trust the end result has been even _more_ primary. A lot of the 
design has centered around not doing things that are unsafe (eg the whole 
"never ever re-write an object" thing was obviously a big part of the 
design, and a lot of it is about being able to do things quickly _without_ 
having to do slow things like fsync).

			Linus

