Hi,

we own a git repository whose size is around 3.4 GiB.

Windows:

    git clone git://pbzcvs01/cis/dev
    Cloning into 'dev'...
    remote: Counting objects: 373667, done.
    remote: Compressing objects: 100% (56072/56072), done.
    Receiving objects: 100% (373667/373667), 3.46 GiB | 63.15 MiB/s, done.

While the client is receiving the objects from the remote, we observe that on the remote (a Linux system) the allocated memory increases linearly with the amount of transferred data. When the transfer is at 99%, we see the following on the remote:

Linux:

    PID   USER  PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+    COMMAND
    10197 root  20   0  4318m  3.4g  3.3g  R  40.2  91.6  0:26.97  git
    10196 root  20   0  1191m   51m   10m  S  19.0   1.3  0:07.24  git-upload-pack

After the transmission has completed, the memory is freed instantly.

My question is: is it really necessary to keep all compressed objects in memory and release them only after the transmission has completed? I ask because when several clients do a clone simultaneously, we run into out-of-memory errors.

Windows:
Windows 7 Enterprise, SP1, 64 bit
git version 1.9.5.msysgit.0

Linux:
Red Hat Enterprise Linux Server release 6.3 (Santiago), 2.6.32-279.5.1.el6.x86_64
git version 2.3.0 (compiled locally)

P.S.: Please forgive me if a similar mail was already posted on the mailing list.

best regards
Günther Demetz

Würth Phoenix S.r.l.
Product Development (CIS)
Via Kravogl 4
I-39100 Bolzano
Direct: +39 0471 564 061
Fax: +39 0471 564 122
E-Mail: guenther.demetz@xxxxxxxxxxxxxxxxxx
Website: www.wuerth-phoenix.com
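P.P.S.: For completeness, a rough sketch of how the out-of-memory situation can be reproduced (the repository URL is the one from above; the client count of 4 is arbitrary, in practice it depends on how much RAM the server has):

    #!/bin/sh
    # Start several clones in parallel to simulate multiple clients;
    # each clone goes into its own target directory.
    for i in 1 2 3 4; do
        git clone git://pbzcvs01/cis/dev dev-$i &
    done

    # Meanwhile, on the server, the growing git processes can be
    # observed with e.g.:
    #   watch -n 1 'top -b -n 1 | grep git'

    # Wait for all background clones to finish.
    wait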