There are still a number of important edge cases for which FTP is
superior to any other widely available protocol: wildcard
transfers of multiple files, text file transfers between systems
with different character encoding conventions, and third-party
mediated transfers (used regularly in the broadcast TV industry,
where having system A control the movement of content from B to C
is exactly what is needed).
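To make the third-party case concrete: because FTP's control and
data channels are separate connections, a mediator can hold
control connections to two servers and splice their data channels
together (often called FXP). A minimal sketch with Python's
standard ftplib - the host names, credentials, and file name are
made up, and voidresp is one of ftplib's undocumented but
long-standing helpers:

    from ftplib import FTP

    # Made-up hosts, credentials, and file name, purely for
    # illustration.
    src = FTP('ftp.b.example')       # system B, which has the file
    dst = FTP('ftp.c.example')       # system C, which should get it
    src.login('user', 'secret')
    dst.login('user', 'secret')

    # Binary mode on both sides so nothing rewrites the contents.
    src.voidcmd('TYPE I')
    dst.voidcmd('TYPE I')

    # Ask C to listen; its 227 reply carries the address as
    # "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)".
    resp = dst.sendcmd('PASV')
    hostport = resp[resp.index('(') + 1:resp.index(')')]

    # Hand that address to B, which will open the data connection.
    src.sendcmd('PORT ' + hostport)

    # C stores what B sends; data flows directly from B to C while
    # system A (this script) only touches the control channels.
    dst.sendcmd('STOR content.mxf')  # preliminary 150 reply
    src.sendcmd('RETR content.mxf')
    src.voidresp()                   # wait for the 226 completions
    dst.voidresp()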
However, FTP does look a bit antiquated by now - what with its
support for file and record types that are almost (but not quite)
entirely nonexistent on modern systems; a lot of implementations
sadly never figured out how to make it work through NAT [*] (or
the FTP ALGs in a lot of NATs didn't work right); and I have a
hard time recommending for widespread use any protocol that
doesn't have encryption as an ordinary, widely-implemented
feature.
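To make the NAT problem concrete: in active mode the client's
PORT command embeds the client's own IP address and port in the
command payload, so a NAT has to parse and rewrite FTP itself
(that's the ALG) or the server ends up connecting to an
unreachable private address. Passive mode sidesteps the
client-side half of this. A minimal sketch, again with Python's
standard ftplib and a made-up host:

    from ftplib import FTP

    ftp = FTP('ftp.example.org')     # made-up host
    ftp.login()                      # anonymous login

    # Active mode: the client sends "PORT h1,h2,h3,h4,p1,p2" with
    # its own address in the payload.  Behind a NAT that address
    # is private, and unless an ALG rewrites the command, the
    # server's data connection goes nowhere.
    ftp.set_pasv(False)

    # Passive mode: the server listens and tells the client where
    # to connect, so the client's NAT only sees an ordinary
    # outbound connection.  This is why modern clients default to
    # passive.
    ftp.set_pasv(True)
    with open('file.bin', 'wb') as f:
        ftp.retrbinary('RETR file.bin', f.write)
    ftp.quit()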
By this time, FTP's main virtue might be that it's been around so
long that almost every system in existence supports an
implementation of it - which is not the same thing as saying that
most implementations of FTP are well-written. But it is a virtue:
being able to effectively transfer files between any pair of
systems (including between old and new systems) is useful even
given FTP's limitations. HTTP implementations don't do this in
practice at all; sftp/sshfs (while amazingly useful) is neither a
standard nor anywhere near so widely deployed; rsync is, and
always has been, a mess; and the various file system access
protocols (NFS/CIFS/etc.) are very problematic to use.
But I'd really like it if we had something better that would
eventually enjoy the same level of deployment as FTP. My bet is
that sftp/sshfs is the most likely replacement, but it would need
to be picked up again and some issues resolved.
Keith
[*] (Of course, not working through NAT could be regarded as a
feature - but sadly not one that has caused people to ditch NAT.)
On 09/29/2017 10:48 AM, Phillip Hallam-Baker wrote:
How strange,

I have always found FTP to be an absolute dog of a protocol
precisely because it mandates idiot defaults and implementations
are required to perform heuristic hacks. I always used to have to
transmit files twice because the first time the transfer would be
corrupted by the default being 'trash my data in case I am using
an IBM machine'.
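(The default in question is ASCII mode, TYPE A, which translates
line endings and character codes in transit and corrupts any
binary payload. A minimal sketch of the workaround, using
Python's standard ftplib with a made-up host and file name:

    from ftplib import FTP

    ftp = FTP('ftp.example.net')     # made-up host
    ftp.login()

    # The protocol default is ASCII mode (TYPE A), which
    # translates line endings and character codes in transit -
    # fatal for any binary file.  Force image (binary) mode
    # explicitly:
    ftp.voidcmd('TYPE I')

    # retrbinary also re-issues TYPE I itself; retrlines would
    # use TYPE A and mangle a binary payload.
    with open('site.tar.gz', 'wb') as f:
        ftp.retrbinary('RETR site.tar.gz', f.write)
    ftp.quit()

)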
The separation of command and data is a good idea, but carrying
them over separate TCP/IP streams is an utter disaster for
reliability. None of the FTPS clients I have found for Windows is
able to reliably transmit a Web site update.
There are much better options: rsync and SFTP (SSH FTP) both work
well, but unfortunately my hosting provider does not support
them, which is why I am leaving my current provider when I get
round to it.
FTP to HISTORIC. The time has come.