Hmm. Yes, you make a good point. There is plenty of space for tar
archiving on the backup system. The elements of what I need would then
be:

1. A client process to read all the files (locking them in the
   process), compress them somewhat on the fly (much as rsync does),
   and move them through a TCP connection or FIFO.

2. A program to create an encrypted connection between the client and
   server and accept its data on a redirected port or via a pipe/FIFO,
   doing the reverse for output on the server.

3. Something very much like tar on the server, to receive, archive,
   and finally compress the data.

(I've put a rough sketch of how those pieces might fit together at the
bottom of this message, below the quoted text.)

Afbackup is a PITA, so maybe I should look into Amanda. Those are the
only mainstream text-mode Linux backup apps of which I am aware.

Luke

On Wed, 29 Sep 2004, Janina Sajka wrote:

> Ah, yes, I understand your predicament now. I wondered where you were
> coming from, because it seems you would be on top of doing this via tar.
> But one does need room to construct the archive.
>
> Of course, I could offer you space to do it--and will, if you'd like.
> But that doesn't satisfy the problem directly.
>
> I wonder if there's a way to set tar up on the target system to take its
> input on some particular port and have rsync talk directly to that port?
> I know you can do that with rsync, but don't know about tar.
>
> But something like that might work, it seems, especially if you first
> put up the directory tree so there was the skeleton of a file system to
> go to? I guess I don't understand enough of how tar works, so let me
> take the app out of the picture.
>
> Generically, it seems what you want is some kind of daemon listening on
> the target machine that will faithfully accept data into a container.
> And isn't this the basic principle underlying network-based backups?
> Seems there's a way to do this. I just don't know what it is exactly.
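Here is the sketch mentioned above: a minimal way the three elements
could collapse into a single pipeline using stock tools, assuming the
backup host runs sshd; the hostname and paths are just placeholders.

  # On the client: tar packs the tree, gzip compresses on the fly, ssh
  # provides the encrypted transport, and cat on the server writes the
  # stream into a single archive file.
  tar -cf - /home/luke | gzip | ssh backuphost 'cat > /backups/luke-home.tar.gz'

That covers the archiving, on-the-fly compression, and encrypted
transport with nothing needed on the server beyond sshd and disk
space. The one thing it does not do is lock files while reading them,
so that part would still need separate handling.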
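And for the generic "daemon listening on the target machine" idea in
the quoted message, plain netcat could stand in for that daemon as a
proof of concept (hostname, port, and paths again made up; netcat
option syntax varies between versions, and this transport is not
encrypted on its own).

  # On the backup server: listen on TCP port 9000 and write whatever
  # arrives into a compressed archive (the "container").
  nc -l -p 9000 | gzip > /backups/luke-home.tar.gz

  # On the client: tar writes the tree to stdout and sends it to that
  # port. Some netcat versions need -q 0 or -N to close the connection
  # once tar finishes.
  tar -cf - /home/luke | nc backuphost 9000

Wrapping that in stunnel or an ssh port forward would supply the
encryption element.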