Re: FTP and file transfers

Joe Touch wrote:
> On 9/29/2017 7:48 AM, Phillip Hallam-Baker wrote:
>> How strange, I have always found FTP to be an absolute dog of a
>> protocol precisely because it mandates idiot defaults and
>> implementations are required to perform heuristic hacks. I always used
>> to have to transmit files twice because the first time the transfer
>> would be corrupted by the default being 'trash my data in case I am
>> using an IBM machine'.
>>
>> The separation of the command and data is a good one but separating it
>> over separate TCP/IP streams is an utter disaster for reliability.
>> None of the FTPS clients I have found for Windows is able to reliably
>> transmit a Web site update.
> 
> HTTP ignored this at its own peril, and ended up creating its own
> multiplexing layer to avoid HOL blocking and support signalling,
> reinventing parts of FTP poorly, IMO.

There are some design-philosophy problems here.  With TCP, you either
have in-band signaling with inline multiplexing, or else you have
out-of-band / multi-band signaling, which generally forces you to encode
endpoint identifiers inside the signaling stream.  Neither approach is
good.
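
To make "endpoint identifiers inside the signaling stream" concrete:
FTP's PASV exchange is the textbook case.  The server's 227 reply on the
control connection carries the host and port for a second, unrelated TCP
connection, which the client then has to dial.  A minimal Python sketch
(the function name is mine; error handling is pared down for brevity):

    import re
    import socket

    def open_pasv_data_connection(ctrl: socket.socket) -> socket.socket:
        """Ask the server for a passive data port, then dial it."""
        ctrl.sendall(b"PASV\r\n")
        reply = ctrl.recv(4096).decode("ascii", "replace")
        # Expected form: 227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)
        m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
        if not m:
            raise RuntimeError("unparseable PASV reply: " + reply)
        h1, h2, h3, h4, p1, p2 = map(int, m.groups())
        host = f"{h1}.{h2}.{h3}.{h4}"
        port = p1 * 256 + p2   # port split across two decimal bytes
        # A second TCP connection the network never saw negotiated --
        # which is exactly why NATs and firewalls break FTP so often.
        return socket.create_connection((host, port))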

QUIC sets out to solve both of these problems by moving away from TCP
entirely; that approach is worth learning from, but it is not without
problems of its own.
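
For contrast, the in-band alternative that HTTP/2 built on top of TCP
(and QUIC pushed down into the transport) tags every chunk with a stream
identifier, so one connection can interleave many transfers.  A toy
framing sketch in Python -- the frame layout here is invented for the
example, not any real wire format:

    import struct

    def pack_frame(stream_id: int, payload: bytes) -> bytes:
        # 4-byte stream id + 4-byte length, then the payload
        return struct.pack("!II", stream_id, len(payload)) + payload

    def unpack_frames(buf: bytes):
        while buf:
            stream_id, length = struct.unpack_from("!II", buf)
            yield stream_id, buf[8:8 + length]
            buf = buf[8 + length:]

    # Two "transfers" interleaved on one connection, no second socket:
    wire = pack_frame(1, b"GET /index.html") + pack_frame(3, b"GET /logo.png")
    for sid, data in unpack_frames(wire):
        print(sid, data)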

> Faults in the implementation of Windows software are far too easy to
> find to blame them on the protocol itself.

I'm with Phillip on this one: the FTP protocol is awful from the
standpoints of design, implementation, and operations.
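
For anyone who hasn't been bitten by it: the "trash my data" default
Phillip mentions is FTP's ASCII transfer type (TYPE A), which rewrites
line endings in transit.  Clients have to opt into binary mode
explicitly.  A sketch using Python's ftplib (host and file name are
made up):

    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:
        ftp.login()            # anonymous login
        ftp.voidcmd("TYPE I")  # binary ("image") mode -- not the default
        with open("site.tar.gz", "wb") as f:
            # retrbinary also issues TYPE I itself in CPython, but being
            # explicit documents the intent
            ftp.retrbinary("RETR site.tar.gz", f.write)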

Nick



