Mikkel L. Ellertson wrote:
> The problem is that the entire archive is compressed, and not the
[snip]

No, it is not. Please speak of what you know, rather than what you conjecture.[1]

I ran a test in which I created a tar archive, not compressed, just straight tar format. The size of the file was 3653601280 bytes. I created another tar archive, also not compressed, straight tar format. The size of that file was 10240 bytes. It took less than a second to create. I then used "tar A" to append the tiny archive to the larger one, and the run time was 2:20; that is, two minutes twenty seconds. During that time, my machine was approximately 80% wait state and approximately 16% system state, per top.

It may be that, due to the format of a tar file, tar is architecturally constrained to do an individual seek for every file contained in the archive, and that doing tens of thousands of seeks in this rather large file is necessarily time-consuming. This archive has 34619 files in it. I watched the free space on the file system using df and found that it did not vary during this process, so I conjecture that tar was seeking rather than copying.
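For reference, a rough reproduction of that test might look like the following, assuming GNU tar; the paths and names are only illustrative:

  # build a large plain (uncompressed) archive and a tiny one
  tar -cf big.tar /some/large/tree    # ~3.6 GB, tens of thousands of members
  tar -cf small.tar one-small-file    # 10240 bytes (one record at the default blocking factor)

  # append the tiny archive to the large one, and time it
  time tar -Af big.tar small.tar

  # in another terminal, watch free space; it stays essentially constant
  df .

Note that "tar A" (--concatenate) has to find the end-of-archive marker, two 512-byte blocks of zeros, before it can append anything, which in practice means stepping over every member header in the existing archive; that is consistent with the mostly-wait-state behavior described above.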
> If you have the space, you would be better off creating an uncompressed archive of all the files you want to back up, and then compress and split the archive at the end.
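(For concreteness, that approach might look something like the following with GNU tar, gzip, and split; the names and sizes are made up:)

  tar -cf backup.tar /path/to/files            # plain, uncompressed archive
  gzip backup.tar                              # compress once, at the end
  split -b 700m backup.tar.gz backup.tar.gz.   # cut into ~700 MB pieces
  # restore later with:  cat backup.tar.gz.* | gunzip | tar -xf -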
That's what I've been doing during this experimentation.

[snip]

[1] I have no problem with conjectures so long as they are stated as such for exploration, rather than posed as solutions.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!