Re: FTP and file transfers

Sorry... hit "send" a bit too soon.  I meant to add that it
might be worth looking at some of those older protocols to see
whether, if their basic target functionality is still useful
(even if only for an identifiable niche), known problems
(either in the original design or due to developments since
they were designed) can be fixed by new features.

Name-calling, e.g., "FTP is awful", doesn't help with improving
understanding.  I happen to find HTTP/HTTPS fairly awful when
all I need to do is transfer a file that I have no desire to
display or otherwise render in real time.  Some browsers
mitigate that particular awfulness somewhat by providing a
"download this URL" option, but, because the idea of
interpreting a location, finding an object, interpreting it, and
retrieving it for local interpretation and display seems
integral to the web design, that still doesn't do a good job of
addressing cases in which the desire is to precisely point to
something in a remote file system, retrieve it (optionally using
a network-defined canonical form), then get it into the
local file system in appropriate form.  Those concepts are as
important to FTP --and native to it-- as the separate control
and data streams.  We could debate how often they are needed
today, but the need is clearly above zero.
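
As a rough illustration of that kind of transfer (a sketch
only, with hypothetical host and path names, using Python's
standard ftplib):

    # Sketch only: the host, directory, and file name below are
    # hypothetical; Python's standard ftplib is used to illustrate.
    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:
        ftp.login()                  # anonymous login for the example
        # Point precisely into the remote file system:
        ftp.cwd("/pub/datasets")
        with open("archive.tar.gz", "wb") as f:
            # retrbinary issues TYPE I, so the object comes back
            # as-is, with no attempt to interpret or render it.
            ftp.retrbinary("RETR archive.tar.gz", f.write)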

Even with the two-stream model, I think it is important to
recognize what that does do, independent of aesthetic arguments
about separate or combined streams.  Later implementations and
even versions of the protocol notwithstanding, the two streams
were intended to be completely asynchronous, allowing not only
"passive" (including third host) transfers but checking status
or even changing or aborting transfers in progress without
tearing down the connection or leaving the host at one end
unsure whether the connection failed or the other one
deliberately aborted.  It also allows starting (but still
controlling) several transfers from the same control connection.
That enables several things that we have never exploited or
included in the published protocol, but that doesn't mean the
possibilities are absent.
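
To make the separation concrete, here is a minimal sketch
(again with hypothetical names) in which the file flows over a
second connection while the control connection stays free:

    # Sketch only (hypothetical host and file name): the point is
    # that the file moves on a separate data connection while the
    # control connection stays available.
    from ftplib import FTP

    ftp = FTP("ftp.example.org")
    ftp.login()
    ftp.voidcmd("TYPE I")          # binary transfer mode
    # The data arrives on a second, separate connection:
    conn = ftp.transfercmd("RETR big-image.iso")
    with open("big-image.iso", "wb") as f:
        while True:
            chunk = conn.recv(8192)
            if not chunk:
                break
            f.write(chunk)
    conn.close()
    # Read the 226 completion reply on the control connection.
    ftp.voidresp()
    # Because the control connection never carried the file itself,
    # it was free the whole time for a STAT, an ABOR, or starting
    # another transfer.
    ftp.quit()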

Similarly, getting FTP to meet today's norms for authentication,
authorization, and privacy would not be straightforward.  Those
capabilities are certainly not part of the 1985 (or 1971)
specifications of the protocol.  But we certainly know enough to
be able to do those things and maybe even to exploit some
advantages of doing key exchange out of band with respect to
the data stream.  IMO, it would be far more useful to have those
discussions rather than saying "FTP is awful" and, by
implication, "anyone who thinks they need or want the features
of protocols modeled like that is guilty of wrong thinking and
should, for the improvement of their souls, just lose out" (I am
deliberately exaggerating relative to anything that has been
said in this thread, but not much relative to things that have
been said in previous versions of this conversation).
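
FTPS along the lines of RFC 4217 is one existing step in that
direction: the TLS negotiation happens on the control
connection, separately from the data connections it then
protects.  A minimal sketch with Python's FTP_TLS, placeholder
host and credentials:

    # Sketch only; host and credentials are placeholders.  FTP_TLS
    # follows the RFC 4217 model: the TLS handshake runs on the
    # control connection (AUTH TLS), and PROT P then protects the
    # data connections as well.
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.org")
    ftps.login("user", "password")   # secures the control channel first
    ftps.prot_p()                    # request protected data connections
    with open("report.pdf", "wb") as f:
        ftps.retrbinary("RETR report.pdf", f.write)
    ftps.quit()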

    john
 


--On Wednesday, October 4, 2017 06:05 -0700 Joe Touch
<touch@xxxxxxxxxxxxxx> wrote:

> On 9/29/2017 7:48 AM, Phillip Hallam-Baker wrote:
>> How strange, I have always found FTP to be an absolute dog of
>> a protocol precisely because it mandates idiot defaults and
>> implementations are required to perform heuristic hacks. I
>> always used to have to transmit files twice because the first
>> time the transfer would be corrupted by the default being
>> 'trash my data in case I am using an IBM machine'.
>> 
>> The separation of the command and data is a good one but
>> separating it over separate TCP/IP streams is an utter
>> disaster for reliability. None of the FTPS clients I have
>> found for Windows is able to reliably transmit a Web site
>> update.
> 
> HTTP ignored this at their own peril, and ended up creating
> its own multiplexing layer to avoid HOL blocking and support
> signalling and reinventing parts of FTP poorly, IMO.
> 
> Faults in the implementation of Windows software are far too
> easy to find to blame them on the protocol itself.

And that is a specific, and IMO helpful, example of the point I
was trying to make.   We would do well to look carefully at
protocols like FTP (and even, e.g., Archie, Gopher and WAIS) to
see what can be learned from them rather than saying "It, or
implementations of it, won't do XYZ well, therefore it is
hopelessly obsolete and should be abandoned or deprecated".
Similarly, rather than saying what amounts to "ABC (typically
'the web') has won and everything else is irrelevant" --a claim
I've even heard relative to lower-layer protocols not needed to
directly support HTTP/HTTPS-- we should be asking whether there
are problems, even niche problems, that can be better addressed
in other ways.  

We might even end up with a richer and more diverse and useful
Internet.

    john






