Re: FTP and file transfers

There are still a number of important edge cases for which FTP is superior to any other widely available protocol: wildcard transfers of multiple files, text file transfers between systems with different character encoding conventions, and third-party mediated transfers (used regularly in the broadcast TV industry, where having system A control the movement of content from B to C is exactly what is needed).
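
For concreteness, here is a minimal sketch of such a third-party (FXP) transfer using Python's standard ftplib; the host names, logins, and file names are placeholders, and it assumes both servers permit server-to-server data connections:

    from ftplib import FTP

    # Placeholder hosts, credentials, and file names.
    src = FTP("ftp.source.example")   # server B, which holds the file
    dst = FTP("ftp.dest.example")     # server C, which receives it
    src.login("user", "password")
    dst.login("user", "password")

    # Put the source into passive mode; its 227 reply carries the
    # data address as "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)".
    resp = src.sendcmd("PASV")
    nums = resp[resp.index("(") + 1 : resp.index(")")]

    # Point the destination's active-mode data connection at that address.
    dst.sendcmd("PORT " + nums)

    # Destination stores what the source retrieves; the file data flows
    # directly between B and C while A only issues control commands.
    dst.sendcmd("STOR copy-of-file.bin")   # preliminary 150 reply
    src.sendcmd("RETR file.bin")           # preliminary 150 reply
    src.voidresp()                         # final 226 from the source
    dst.voidresp()                         # final 226 from the destination

    src.quit()
    dst.quit()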

However, FTP does look a bit antiquated by now: it supports file and record types that are almost (but not quite) entirely nonexistent on modern systems; many implementations sadly never figured out how to make it work through NAT [*] (or many NAT ALGs didn't handle it correctly); and I have a hard time recommending for widespread use any protocol that doesn't have encryption as an ordinary, widely-implemented feature.
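
As an aside, the usual client-side workaround for NAT is passive mode, in which the client opens the data connection outbound rather than accepting one from the server; a tiny sketch with Python's ftplib (placeholder host):

    from ftplib import FTP

    ftp = FTP("ftp.example.org")   # placeholder host
    ftp.login()                    # anonymous login
    ftp.set_pasv(True)             # ftplib's default, shown explicitly here
    ftp.retrlines("LIST")          # data connection opens from the client side
    ftp.quit()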

By this time, FTP's main virtue might be that it's been around so long that almost every system in existence supports an implementation of it, which is not the same thing as saying that most implementations of FTP are well-written.   But this is a virtue: being able to effectively transfer files between any pair of systems (including between old and new systems) is useful even given FTP's limitations.   HTTP implementations don't support this in practice at all, sftp/sshfs (while amazingly useful) is neither a standard nor nearly so widely deployed, rsync is and always has been a mess, and the various file system access protocols (NFS/CIFS/etc.) are very problematic to use.

But I'd really like it if we had something better, that would eventually enjoy the same level of deployment as FTP.   My bet is that sftp/sshfs is the most likely replacement, but it would need to be picked up again and some issues resolved.

Keith


[*]  (Of course, not working through NAT could be regarded as a feature - but sadly not one that has caused people to ditch NAT.)


On 09/29/2017 10:48 AM, Phillip Hallam-Baker wrote:
How strange, I have always found FTP to be an absolute dog of a protocol precisely because it mandates idiot defaults and implementations are required to perform heuristic hacks. I always used to have to transmit files twice because the first time the transfer would be corrupted by the default being 'trash my data in case I am using an IBM machine'.
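
For what it's worth, the way to sidestep that default in practice is to force binary (TYPE I) mode explicitly; a minimal sketch with Python's ftplib, where the host, login, and file name are placeholders:

    from ftplib import FTP

    ftp = FTP("ftp.example.net")   # placeholder host
    ftp.login("user", "password")
    # storbinary issues "TYPE I" before STOR, so no end-of-line or
    # character-set munging can corrupt the transfer.
    with open("site-update.tar.gz", "rb") as f:
        ftp.storbinary("STOR site-update.tar.gz", f)
    ftp.quit()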

The separation of command and data is a good one, but carrying them over separate TCP/IP streams is an utter disaster for reliability. None of the FTPS clients I have found for Windows is able to reliably transmit a Web site update.

There are much better options: rsync and SFTP (SSH FTP) do work well, but unfortunately my hosting provider does not support them, which is why I am leaving my current provider when I get round to it.

FTP to HISTORIC. The time has come.


On Thu, Sep 28, 2017 at 6:45 PM, John C Klensin <john-ietf@xxxxxxx> wrote:


--On Thursday, September 28, 2017 09:57 +0100 "tom p."
<daedulus@xxxxxxxxxxxxx> wrote:

> The obvious one, which disrupts my work, is the Date Created.
> FTP gives me date which is, or is close to, the creation date
> of the RFC by the RFC Editor.
>
> Internet Explorer makes the Date Created the date on which I
> perform the download, which may be years later and so
> thoroughly misleading (to me).
>
> So standalone FTP every time.

Because FTP, by design, has a command in the protocol to
transmit a data type (primitive version of content type) and a
canonical form for text on the wire, competent implementations
are also capable of delivering files with EOL conventions, and
even character encodings, appropriate to the receiving
environment.  We learned lessons a _very_ long time ago from
EBCDIC and two (or, depending on how you count, at least four)
different encoding forms for ASCII that led to that feature and
the TYPE command.
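
To make that concrete, here is a minimal sketch of a TYPE A retrieval with Python's ftplib (placeholder host and file name); retrlines issues "TYPE A", and the canonical CRLF line endings on the wire are mapped to the local convention on output:

    from ftplib import FTP

    ftp = FTP("ftp.example.org")           # placeholder host
    ftp.login()
    # The callback receives each line with the wire CRLF stripped;
    # writing through a text-mode file applies the local EOL convention.
    with open("notes.txt", "w") as out:
        ftp.retrlines("RETR notes.txt", lambda line: out.write(line + "\n"))
    ftp.quit()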

Perhaps unfortunately from where we stand today, the community
effectively decided to discard that feature, with a number of
client implementations deciding that binary transfers (in
FTP-speak, TYPE I) were sufficient and that receiving systems
should just get exact copies of whatever the sending system had
and sort it out themselves, in the process ignoring the FTP
requirement [RFC959, Section 4.1.2, "Representation Type"] that
the default, if TYPE is not specified, is ASCII Non-print.  While
TFTP [RFC1350] has a similar feature ("netascii" mode), AFAIK,
other FTP alternatives for transferring data under different
conditions, including SFTP over SSH and Rsync, simply assume
image copies are fine.

Trying to transfer files containing non-ASCII characters makes
the problem worse because, while the spec isn't explicit about
it, an FTP implementation should presumably fail if TYPE A is
used and the contents of the data file cannot be interpreted as
ASCII.  Attempts to add a "TYPE U" (for "Unicode" or "UTF-8") to
FTP to solve that problem for another canonical text
representation have gotten absolutely no traction, leading me to
presume that the community has completely lost interest in these
issues.
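
To illustrate the check implied above, here is a hypothetical fragment that a server (or a careful client) might use to refuse a TYPE A transfer for data that isn't actually ASCII; the function name and structure are my own invention, not anything specified in RFC 959:

    def ascii_transfer_allowed(path):
        """Return True only if the file decodes cleanly as ASCII."""
        with open(path, "rb") as f:
            data = f.read()
        try:
            data.decode("ascii")   # hypothetical gate before honoring TYPE A
            return True
        except UnicodeDecodeError:
            return False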

Someone who believed in the existence of character coding and
transmission deities and divine retribution from them might
conclude that the community deserves this BOM mess, along with
UTF-16 on the wire, as a result of not dealing with the issue
effectively in FTP, TFTP, and a variety of other transfer
mechanisms.  I couldn't possibly comment on that.

    john



