Re: FTP and file transfers

Name-calling doesn't help anyone's understanding of ftp's limitations,
although it does hint at the degree of frustration that the protocol has
caused and, thankfully, in increasingly rare cases, still causes.  It
was never fun to have to deal with ftp from an operational point of
view, and there is nothing about it that I miss.  It caused grief on the
internet, in no short supply.

We've moved on in every respect, and have learned a good deal from
ftp's deficiencies.  We have protocols which handle inline control
multiplexing to cater for both command and data streams; we have
encryption, command pipelining, relatively consistent name space
management, character set management, remote filesystem mirroring,
incremental transfers, remote filesystems and plenty more.  They don't
always work together in the same protocol suite, and when they do, they
mightn't work as consistently as expected, but for sure they are a
vast improvement on what we once used.

There are plenty of utilities which will handle http/https file transfer
just fine.  In the rare cases where http doesn't work, it's usually
because the data stream provider is attempting to restrict access to
their content, succeeding equally at that and at aggrieving their
users.  Most of the problems I've had with http downloads over the
last many years have been related to layer 9, not layer 4.
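
(To make "just fine" concrete: fetching a file over https takes only a
few lines of stock tooling.  The sketch below is just an illustration
using nothing but the Python standard library; the URL and filename are
placeholders, not a real host.)

    import shutil
    import urllib.request

    # Placeholder URL; substitute whatever you actually want to fetch.
    url = "https://example.com/files/release.tar.gz"

    # Stream the response body straight to disk: no rendering, no
    # browser, just a file transfer over https.
    with urllib.request.urlopen(url) as resp, \
            open("release.tar.gz", "wb") as out:
        shutil.copyfileobj(resp, out)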

Inline data stream translation causes more problems than it fixes,
e.g. it breaks data integrity validation, and even if we might like the
idea of ebcdic or other things like vms structured records from an
ideological, or even an archaeological, point of view, they became
roadkill on the highway to device inter-compatibility.  This wasn't a
bad thing either, although we can still learn lessons about structured
data streams and non-ascii encodings and use those ideas in places
where they make sense.
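
(As a concrete illustration of the translation problem: ftp's ASCII
mode rewrites line endings, and historically character sets, in flight,
which is exactly what used to mangle binary transfers and invalidate
checksums.  Forcing image/binary mode sidesteps the translation.  This
is only a sketch; the host, credentials and filename are placeholders.)

    from ftplib import FTP

    # Placeholder server and credentials, for illustration only.
    with FTP("ftp.example.com") as ftp:
        ftp.login("anonymous", "guest@example.com")
        with open("archive.zip", "wb") as out:
            # retrbinary issues TYPE I before RETR, so the octets arrive
            # untouched; retrlines would apply ASCII-mode translation.
            ftp.retrbinary("RETR archive.zip", out.write)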

Some people might even claim that we've ended up with a rich, diverse
and useful Internet, even if it didn't match what was originally
envisaged in the 1980s and 1990s :-)

All that said, this is a mighty fine rathole that we crawled into.
Unless someone wants to write an ID which formally deprecates FTP, which
I would enthusiastically support, a topic stack pop might be appropriate.

Nick

John C Klensin wrote:
> Sorry... hit "send" a bit too soon.  I meant to add that it
> might be worth looking at some of those older protocols to see
> whether, if their basic target functionality is still useful,
> even if only for an identifiable niche, known problems
> (either in the original design or due to developments since they
> were designed) can be fixed by new features.
> 
> Name-calling, e.g., "FTP is awful", doesn't help with improving
> understanding.  I happen to find HTTP/HTTPS fairly awful when
> all I need to do is transfer a file that I have no desire to
> display or otherwise render in real time.  Some browsers
> mitigate that particular awfulness somewhat by providing a
> "download this URL" option, but, because the idea of
> interpreting a location, finding an object, interpreting it, and
> retrieving it for local interpretation and display seems
> integral to the web design, that still doesn't do a good job of
> addressing cases in which the desire is to precisely point to
> something in a remote file system, retrieve it (optionally using
> a network-defined canonical form), then get it into the
> local file system in appropriate form.    Those concepts are as
> important to FTP --and native to it-- as the separate control
> and data streams.  We could debate how often they are needed
> today, but the need is clearly above zero.
> 
> Even with the two-stream model, I think it is important to
> recognize what that does do, independent of aesthetic arguments
> about separate or combined streams.  Later implementations and
> even versions of the protocol notwithstanding, the two streams
> were intended to be completely asynchronous, allowing not only
> "passive" (including third host) transfers but checking status
> or even changing or aborting transfers in progress without
> tearing down the connection or leaving the host at one end
> unsure whether the connection failed or the other one
> deliberately aborted.  It allows starting (but still
> controlling) several transfers from the same control connection.
> That allows several things that we have never exploited or
> included in the published protocol, but that doesn't mean the
> possibilities are absent.  
> 
> Similarly, getting FTP to meet today's norms for authentication,
> authorization, and privacy would not be straightforward.  Those
> capabilities are certainly not part of the 1985 (or 1971)
> specifications of the protocol.  But we certainly know enough to
> be able to do those things and maybe even to exploit some
> advantages of doing key exchange out of band wrt the data
> stream.  IMO, it would be far more useful to have those
> discussions rather than saying "FTP is awful" and, by
> implication, "anyone who thinks they need or want the features
> of protocols modeled like that is guilty of wrong thinking and
> should, for the improvement of their souls, just lose out" (I am
> deliberately exaggerating relative to anything that has been
> said in this thread, but not much relative to things that have
> been said in previous versions of this conversation).
> 
>     john
>  
> 
> 
> --On Wednesday, October 4, 2017 06:05 -0700 Joe Touch
> <touch@xxxxxxxxxxxxxx> wrote:
> 
>> On 9/29/2017 7:48 AM, Phillip Hallam-Baker wrote:
>>> How strange, I have always found FTP to be an absolute dog of
>>> a protocol precisely because it mandates idiot defaults and
>>> implementations are required to perform heuristic hacks. I
>>> always used to have to transmit files twice because the first
>>> time the transfer would be corrupted by the default being
>>> 'trash my data in case I am using an IBM machine'.
>>>
>>> The separation of the command and data is a good one but
>>> separating it over separate TCP/IP streams is an utter
>>> disaster for reliability. None of the FTPS clients I have
>>> found for Windows is able to reliably transmit a Web site
>>> update.
>> HTTP ignored this at its own peril, and ended up creating
>> its own multiplexing layer to avoid HOL blocking and support
>> signalling, reinventing parts of FTP poorly, IMO.
>>
>> Faults in the implementation of Windows software are far too
>> easy to find to blame them on the protocol itself.
> 
> And that is a specific, and IMO helpful, example of the point I
> was trying to make.   We would do well to look carefully at
> protocols like FTP (and even, e.g., Archie, Gopher and WAIS) to
> see what can be learned from them rather than saying "It, or
> implementations of it, won't do XYZ well, therefore it is
> hopelessly obsolete and should be abandoned or deprecated".
> Similarly, rather than saying what amounts to "ABC (typically
> 'the web') has won and everything else is irrelevant" --a claim
> I've even heard relative to lower-layer protocols not needed to
> directly support HTTP/HTTPS-- we should be asking whether there
> are problems, even niche problems, that can be better addressed
> in other ways.  
> 
> We might even end up with a richer and more diverse and useful
> Internet.
> 
>     john
> 
> 
> 
> 



