On Thu, 2004-07-29 at 09:11, Luca Ferrari wrote:
> Here's the command line I use:
>
> /usr/bin/zip -r -u -y -b /tmp/backup /mnt/disco2//letizia.zip *
>
> zip error: Out of memory (allocating temp filename)
>
> and the memory as reported by free before and during the zipping:
>
>              total       used       free     shared    buffers     cached
> Mem:        256624     253180       3444          0      40412      85796
> -/+ buffers/cache:     126972     129652
> Swap:      1020088         16    1020072
>
>              total       used       free     shared    buffers     cached
> Mem:        256624     254096       2528          0      34604      92080
> -/+ buffers/cache:     127412     129212
> Swap:      1020088         16    1020072

Nothing unusual here. I had misread your original post as zipping from
an NTFS partition. If I hadn't, I would have said that the interaction
with NFS is probably the root cause of your trouble and that you should
use a different strategy.

As a quick workaround you could use tar or rsync to get a copy of the
NFS mount onto local disk and zip it from there (a rough sketch is
below, after my sig). I've done both for much larger NFS mounts without
trouble. A better strategy might be to do the zip on the machine that
hosts the NFS mount and transfer the zip file via ftp or rsync.

FWIW, my strategy for backup is to have a backup machine running rsync
as a service. Any machine that has data to back up does a:

    rsync -az --delete /local-tree-to-backup/ \
        rsync://backup-machine/backup-dir

This creates a snapshot of the backup source on the backup machine. I
then compress the snapshot and store it, leaving the snapshot in place
so that the next time a machine does the rsync maneuver only the diffs
are sent, making the process much more efficient on an ongoing basis.
The downside is that you need disk space to keep the snapshots around;
but even if you don't keep them, rsync with the -z option compresses
the data before sending it over the network, so it's still much better
than grabbing the whole thing uncompressed over NFS.

CD
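
P.S. Here's a rough sketch of the local-copy workaround. I'm assuming
the NFS mount is at /mnt/disco2 and that you have enough scratch space
under /var/tmp (both paths are just examples, adjust to taste):

    # Copy the tree off NFS onto local disk; -a preserves
    # permissions and times, the trailing slash copies contents.
    rsync -a /mnt/disco2/ /var/tmp/disco2-copy/

    # Zip the local copy, keeping zip's temp files on local disk
    # too (-b), and storing symlinks as links (-y).
    cd /var/tmp/disco2-copy
    /usr/bin/zip -r -y -b /var/tmp /var/tmp/letizia.zip .

    # Clean up the scratch copy when done.
    rm -rf /var/tmp/disco2-copy

Since zip is now reading and writing local disk only, the NFS
interaction drops out of the picture entirely.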
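
P.P.S. If you want to try the rsync-as-a-service setup, a minimal
/etc/rsyncd.conf on the backup machine might look like the following.
The module name and paths are made up for illustration, not taken from
my actual setup:

    # /etc/rsyncd.conf: one writable module for receiving backups
    [backup-dir]
        path = /backups/snapshots
        read only = no
        uid = root
        gid = root

Start the daemon with "rsync --daemon" (or run it from xinetd), and
the client-side command above will populate /backups/snapshots. To
compress a snapshot afterwards, something like

    tar czf /backups/archives/snapshot-$(date +%Y%m%d).tar.gz \
        -C /backups/snapshots .

keeps a dated archive while leaving the snapshot in place for the next
incremental run.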