Alex Riesen <raa.lkml@xxxxxxxxx> wrote:
> Shawn O. Pearce, Wed, Aug 20, 2008 21:44:07 +0200:
> >
> > We could pick any number for the limit, just so long as it is so
> > large that the size of the reflog for it to be a valid @{nth}
> > request would be something like 1 TB, and thus be highly unlikely.
> >
> > I was just trying to be cute by using the original commit timestamp
> > of Git itself. Perhaps 12936648 (1TB / 83)?
>
> How about the maximum value the platform's size_t can handle?

So on 64-bit platforms we need to wait another 2.92277266 x 10^11 years
before we will ever see a seconds-since-epoch value which can't possibly
be mistaken for a position in the reflog file?

> Not because it is "highly unlikely", but because you and me frankly
> have no idea exactly how unlikely for example a "12936648 terabytes" is?

I have half a brain.  Creating 12 million reflog entries would typically
require 12 million git-update-ref forks.  Anyone who has done that many
since the reflog was introduced and has not yet truncated their reflog
_really_ should reconsider what they are using it for.

Evaluating foo@{12936648} will be _horribly_ expensive.  Anyone who is
waiting on that result and _cares_ about it would already have started
asking on the list for a reflog which is not based on a flat file.  If
they have already patched their Git to use something else (e.g. gdbm)
I have no pity for them when this changes/breaks, as they have clearly
patched their Git rather heavily already.

-- 
Shawn.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
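For list readers following along, the disambiguation being debated can be
sketched as a simple classifier: a bare number N in ref@{N} is read as a
reflog position when it is below some limit, and as a seconds-since-epoch
timestamp above it.  This is an illustrative sketch only, not Git's actual
sha1_name.c parsing code; the cutoff constant and the function name here
are made up for the example.

```c
#include <assert.h>

/*
 * Hypothetical cutoff: 100000000 seconds is ~3.17 years past the
 * epoch, and a reflog with 100000000 entries would be ~8 GB on disk
 * at roughly 83 bytes per line -- far beyond any sane flat-file
 * reflog.  The exact value is an assumption for illustration.
 */
#define NTH_CUTOFF 100000000UL

enum at_kind {
	AT_NTH,        /* @{N} means the Nth prior reflog entry */
	AT_TIMESTAMP   /* @{N} means "the ref as of epoch second N" */
};

/* Classify the bare number found inside ref@{...}. */
static enum at_kind classify_at_number(unsigned long n)
{
	return n < NTH_CUTOFF ? AT_NTH : AT_TIMESTAMP;
}
```

With this sketch, classify_at_number(5) stays a reflog position, while
classify_at_number(1219264847) (a timestamp from August 2008) falls on
the date side of the cutoff.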