Re: FTP as an interesting privacy example (was: Re: FTP Service Discontinuance Under Consideration; Input Requested)

> On 06/04/15 18:45, Ned Freed wrote:
> >> My point is only that if we want to debate the appropriate mechanisms
> >> to put in place to protect the privacy of access to public IETF
> >> information, then let's not do that based on the FTP corner case, but
> >> by considering the general question.
> >
> > And I quite simply disagree with this approach. I think FTP provides an
> > interesting test case and context under which to consider the more
> > general question.

> Really? I honestly don't get why FTP is at all "interesting" from
> the privacy of access POV. Can you explain?

It's interesting precisely because it's one of the services we use to provide
access to our content, and it's one that is intrinsically hostile to privacy.

Even more interesting is how its presence cuts both ways: As long as we have FTP
access, we cannot claim to have secure-only access (which makes some people
happy and others unhappy). But at the same time this can be used as an argument
justifying tightening up or even eliminating non-secure access via other
protocols.

> In my head, how to appropriately setup privacy friendly defaults for
> http is much more interesting (and by "appropriate" I do include maybe
> keeping some form of cleartext access, perhaps no longer as default,
> but that'd have to be figured out).

Having spent a not-inconsiderable amount of time implementing scripts that
perform automated operations on various web site content, I've concluded that
this general area is a complete mess, with conspicuous shortcomings in
protocols, best-practice recommendations, implementations, libraries,
distributions, and deployments. As a bonus, it even drags in certificate
validation, revocation, and expiration issues.

Of course this doesn't mean that the IETF has to deal with the entire mess as
part of its web site policies. But just to provide an example of how deep even
the IETF's part of the rabbit hole goes: I use a stylesheet that sucks in the
RFC index in XML from the RFC Editor via http (not https), and uses it to
correctly present the statuses of various RFCs in our generated documentation.
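
Something along these lines, sketched in Python rather than the stylesheet
itself (the URL, namespace, and element names below are illustrative guesses
on my part, not an exact transcription of what the stylesheet does):

    import urllib.request
    import xml.etree.ElementTree as ET

    # Fetch the RFC index over plain http and pull out each RFC's status.
    RFC_INDEX_URL = "http://www.rfc-editor.org/rfc-index.xml"  # assumed URL

    with urllib.request.urlopen(RFC_INDEX_URL) as resp:
        tree = ET.parse(resp)

    # Namespace and element names are assumptions for illustration.
    ns = {"ri": "http://www.rfc-editor.org/rfc-index"}
    for entry in tree.getroot().findall("ri:rfc-entry", ns):
        doc_id = entry.findtext("ri:doc-id", namespaces=ns)
        status = entry.findtext("ri:current-status", namespaces=ns)
        print(doc_id, status)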

Now suppose the RFC Editor were to switch to https-only access. Or suppose we
were to extend this to handle references to Internet-Drafts. Whatever. The
issues this very simple application would face include:

(1) Does the application software even include https support as an option?
(2) Does the packaged version we're using have that support compiled in? If
    not, are we able to rebuild it or find an alternative?
(3) Does the change create issues for proxies or proxy configuration?
(4) Are there SSL/TLS compatibility issues?
(5) What happens if the IETF screws up their certificate handling? Can we
    turn off certificate validation? (See the sketch below.)
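
To make (5) concrete: in Python's standard library, the "turn off certificate
validation" escape hatch looks roughly like this (a sketch only; whether the
application or library you're stuck with exposes such a knob at all is
precisely the problem):

    import ssl
    import urllib.request

    # Last-resort fallback if the server's certificate handling breaks:
    # build a context that skips hostname checks and chain validation.
    # (This of course discards the security https was supposed to add.)
    insecure = ssl.create_default_context()
    insecure.check_hostname = False
    insecure.verify_mode = ssl.CERT_NONE

    url = "https://www.rfc-editor.org/rfc-index.xml"  # assumed https URL
    data = urllib.request.urlopen(url, context=insecure).read()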

And this is a very straightforward case. Things get even messier if you're
writing applications that access web data using libraries in PHP, Perl, Python,
etc. If you think all of this stuff just works seamlessly and effortlessly, you
need to get out more.
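
To pick just one concrete probe from the Python end of that mess (a sketch;
the equivalent checks in PHP or Perl are spelled completely differently, which
is rather the point):

    # Question (2) above, in executable form: does this particular Python
    # build even have https support available? If the interpreter was built
    # without OpenSSL, the import simply fails.
    try:
        import ssl
        print("https available, built against", ssl.OPENSSL_VERSION)
    except ImportError:
        print("no ssl module; this interpreter cannot speak https at all")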

I note in passing that I have on a couple of occasions found FTP to actually be
the better alternative to HTTP for this sort of thing. (But admittedly never
when proxies are involved.) And so we come full circle.
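
Part of the reason, for what it's worth, is that anonymous FTP drags in no
TLS, no certificates, and no validation policy; in Python it's the same
one-liner as an http fetch (sketch only, and the URL is made up):

    import urllib.request

    # Anonymous FTP fetch: same API as http, no certificate machinery involved.
    data = urllib.request.urlopen("ftp://ftp.example.org/rfc/rfc-index.txt").read()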

One way to look at all this is that the IETF and friends have, for better or
worse, created a lot of what people have come to regard as public, stable
identifiers for accessing various resources. We mess with those at our peril.
Perhaps including those that begin with "ftp:".

				Ned




