Re: FTP

Hi John,

Normally, if I want to share a file with someone, I encrypt it (if necessary) and share it via an online cloud storage provider, or via scp if I am in a Linux environment.  I can't see how this is harder than setting up an FTP server and ensuring that the recipient can find an FTP client, or how FTP could be deemed the most secure way of achieving this.
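
For what it's worth, that whole workflow is a couple of commands.  A
minimal sketch in Python (assuming gpg and scp are installed; the
file, host, and path names here are made up for illustration):

    import subprocess

    # Encrypt symmetrically with a passphrase (gpg prompts for it);
    # this writes report.tar.gz.gpg next to the original file.
    subprocess.run(["gpg", "--symmetric", "report.tar.gz"], check=True)

    # Copy only the ciphertext to the recipient's machine over SSH.
    subprocess.run(
        ["scp", "report.tar.gz.gpg", "alice@example.org:/home/alice/"],
        check=True,
    )

The recipient runs "gpg --decrypt" with the passphrase, exchanged out
of band.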

I still believe that for most users on the Internet, in the vast majority of cases, FTP is no longer the best answer for sharing files.  This is why I believe the IETF making it Historic would arguably be the right thing to do.  I stopped using it 10+ years ago and haven't missed it.

I do agree with the point about sharing files over various HTTP-based services being bespoke.  Perhaps someone should plan a BOF to do an FTP++ (whatever that looks like) to try to bring file sharing into the 21st century with a common solution?

Regards,
Rob

From: John C Klensin <john-ietf@xxxxxxx>
Date: Thursday, 4 July 2024 at 23:00
To: Keith Moore <moore@xxxxxxxxxxxxxxxxxxxx>, Dave Cridland <dave@xxxxxxxxxxxx>
Cc: Phillip Hallam-Baker <phill@xxxxxxxxxxxxxxx>, ietf@xxxxxxxx Discussion <ietf@xxxxxxxx>
Subject: Re: FTP



--On Thursday, July 4, 2024 05:46 -0400 Keith Moore
<moore@xxxxxxxxxxxxxxxxxxxx> wrote:

> On 7/4/24 05:41, Dave Cridland wrote:
>
>>     Problem is, no widely applicable replacement for FTP ever
>>     emerged.   scp is probably the closest but still lacking in
>>     some ways.   I could see deprecating FTP because there
>>     aren't that many systems any more that require its very
>>     baroque approach to file representation, and also because of
>>     lack of good authentication.  What never made sense to me is
>>     not supporting any kind of widely applicable file transfer
>>     protocol standard.

>> I don't dispute that it's not a great situation.
>
> I really regard this as a failure on IETF's part, in that much of
> IETF seems to think nowadays that the Internet is just a way for
> mostly proprietary applications to exchange information with other
> instances of those proprietary applications.  The ARPAnet and
> early Internet had a core set of widely implemented, and generally
> platform-independent, applications, and that played a large part in
> making the Internet great.

Another part of this story is that, in some of the places I hang out,
FTP is still in very heavy use where files need to be made available
to only a handful of parties (perhaps only one other), for multiple
perceived reasons.  Chief among them:

It is relatively easy to set up and run at least a minimal FTP
server; a short sketch of just how little that takes follows below.
If the content of the files being shared is sensitive, having those
files encrypted before being made available to the server is
perceived by those who have taken the FTP path as safer than having
them encrypted only in transit.  By contrast, there is a widely held
perception that setting up and operating web services has become
difficult and complex enough that more organizations are better off
contracting those out.   The latter can quickly lead to questions
about whether one trusts the contractor (usually a cloud provider)
and their staff.  Those questions are different from whether they do
a good enough job of following IETF protocols and recommendations
about protecting data in transit and/or information about who is
accessing what sites and data repositories.
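
To make the "relatively easy" point concrete: a minimal, read-only
server can be about a dozen lines.  This is only a sketch, using the
third-party Python package pyftpdlib; the account name, password,
and directory are placeholders:

    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    # One local account with read-only access to a single directory:
    # "e" = change directory, "l" = list, "r" = retrieve files.
    authorizer = DummyAuthorizer()
    authorizer.add_user("keith", "s3cret", "/srv/ftp", perm="elr")

    handler = FTPHandler
    handler.authorizer = authorizer

    # An unprivileged port; the standard port 21 would require root.
    FTPServer(("0.0.0.0", 2121), handler).serve_forever()

Files dropped into /srv/ftp (already encrypted, per the above) are
then retrievable by anyone holding the credentials.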

If Keith wanted to get a file from me that was too large to easily
transmit as an email attachment, and I wanted to let him, FTP would
still be the easiest --and, by many definitions, the most secure--
way to do that.  It would require that we not care very much whether
someone with access to sniff our ISPs or the backbone could find out
that we talk with each other, or even how we name our files, but they
certainly would not learn anything profound.

If one needs to conceal that sort of metadata, it may drive data
providers in certain directions.  If not, it looks like a lot of
unnecessary complexity,
things that are hard for non-experts to understand, and things that
might go wrong.  A (at least implicit) design maxim for the early
ARPANET and Internet, especially at the applications layer, was that
simplicity was good.  It led to more implementations, more
interoperability, and fewer bugs, bugs that could create their own
risks and problems.  I remember criticisms of early FTP that it was
too complex and that one important early implementation did not
support many of its features and options.  Arguably, HTTP 1.0 and 1.1
were less complex than FTP (and built on the same general model).
Today, if complexity is a concern and we look at HTTPS in practice...
much worse.

I'm not sure whether, like Keith, I see the current state of things
as the IETF's fault or, e.g., the IETF as, like many other things, a
victim of a series of other forces.  But unless the conclusion is
that the IETF is making itself irrelevant, I'm not sure how much
difference it makes.

  john


