On Sun, Apr 1, 2012 at 4:27 AM, Bo Chen <chen@xxxxxxxxxxxxxx> wrote:
>> Who decides bigness:
>> Bigness seems to be relative to system resources. Does the user crunch the
>> numbers to determine whether a file is a big file, or does git? If the
>> numbers are relative, should git query the system and make the
>> determination? Either way, once the system resources are upgraded and
>> formerly "big" files are no longer considered "big", how is the previous
>> history refactored to behave "non-big-file-like"? Conversely, if the
>> system resources are redistributed so that formerly non-big files are now
>> relatively big (e.g. moved from a powerful central server login to
>> laptops), how is the history refactored to accommodate the newly relative
>> bigness?
>>
>
> Common sense says a file of tens of MBs should not be considered a big
> file, but a file of tens of GBs should definitely be considered one. I
> think one simple workable solution is to let the user set the big-file
> threshold.

We currently have core.bigFileThreshold = 512MB.

> A more complicated but intelligent solution is to let git auto-configure
> the threshold by evaluating the computing resources of the running
> platform (a physical machine or just a VM). As for the problem of
> migrating git between platforms equipped with different computing power,
> the git repo should also keep track of the big-file threshold under which
> a specific file was handled.
--
Duy
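
P.S. For anyone who wants to experiment with the knob mentioned above: it
can be overridden per repository (or globally) with git config; the 2g value
below is only an illustration, not a recommendation. Files above the
threshold are stored deflated in packs without delta compression.

    # raise the big-file threshold for this repository to 2 GiB
    git config core.bigFileThreshold 2g

    # or set it for all repositories of the current user
    git config --global core.bigFileThreshold 2g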