Re: Call for Community Feedback: Retiring IETF FTP Service

On 30.11.2020 at 17:21, Keith Moore wrote:
> On 11/30/20 10:52 AM, Roman Danyliw wrote:
>
>> If one visits, https://www.rfc-editor.org/rfc/rfc7230.txt, is a TXT not
>> returned?
>
> How would you know?  If you visit a file from a browser, you only know
> what the browser shows you.

Well, you could check with the web developer tools.
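
For example, you can bypass the browser entirely and ask the server
directly. A rough sketch in Python (standard library only; the URL is
the one cited above):

  # Sketch: fetch the RFC and report what the server says it is sending.
  import urllib.request

  url = "https://www.rfc-editor.org/rfc/rfc7230.txt"
  with urllib.request.urlopen(url) as resp:
      print("Content-Type:", resp.headers.get("Content-Type"))
      print("Bytes received:", len(resp.read()))

Whatever a particular browser then chooses to render is a separate
question from what the server actually returned.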

> If for instance I click on that link and then hit ^U in the browser
> (which happens to be Brave), what I now see appears to be text with line
> numbers.   I can't tell if the formfeeds are there or not.

Firefox shows you the form feeds (as control characters). So is your point
that there *can* be a user agent that behaves differently? How is that
materially different from different text editors doing things differently?

> If I do the same in firefox, I get text without line numbers. The
> formfeeds actually appear to be there - they show up as squares with
> "000C" in them.
>
> In chromium, I get the same behavior as Brave.
>
> In the past, I've gotten other results, such as seeing HTML tags
> embedded in what is supposedly the "source" of the plain text file.
>
> The general problem is that when you use a web browser to view something
> and then save or print it, the behavior is undefined. No standard says
> what should happen, and you don't know what you're going to get.

<https://html.spec.whatwg.org/multipage/links.html#as-a-download> - but
I confess that I'm not going to read that algorithm :-)

I agree that Chrome does not *show* the FF character, but it *is*
present in the saved file.
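
That is easy to check against a locally saved copy; a quick sketch,
where "rfc7230.txt" is just an example filename for whatever
"Save page as" produced:

  # Sketch: count form feed characters (U+000C) in the saved file.
  with open("rfc7230.txt", "rb") as f:
      print("Form feed (0x0C) count:", f.read().count(b"\x0c"))

A nonzero count means the form feeds survived the save.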

> But a related problem is that people keep gratuitously changing things.
> So even if you find a workaround to some bit of damage, that workaround
> is not assured to work in the future.

To be clear, we're just talking about damage caused by web browsers
here.   I doubt that anyone is clicking on a .txt and getting .html or
.pdf from the web server.

> But this illustrates why some of us prefer to avoid using web browsers
> for some things, and instead rely on tools that have well-defined behavior.
>
> Also, I don't think the HTTP protocol corrupts files, though I have seen
> software that would silently ignore an incomplete HTTP file transfer.

Which *should* be a thing of the past with HTTP/2 because of better
message framing.
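
And even over HTTP/1.1, a careful client can detect a short read rather
than silently accept it - roughly along these lines (a sketch that
assumes the server sends a Content-Length header):

  # Sketch: flag a truncated transfer by comparing the declared
  # Content-Length with the number of bytes actually received.
  import urllib.request

  url = "https://www.rfc-editor.org/rfc/rfc7230.txt"
  with urllib.request.urlopen(url) as resp:
      body = resp.read()
      declared = resp.headers.get("Content-Length")
      if declared is not None and len(body) != int(declared):
          raise IOError(f"truncated: got {len(body)} of {declared} bytes")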

> But if what you actually need to do is browse files to pick out the ones
> you want, vanilla HTTP doesn't provide what a tool needs to reliably do
> that.

It does not?
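
A tool can at least approximate that by scraping the server's index
page - a rough sketch, assuming the server emits an HTML listing at
that URL (the format of such listings is, admittedly, server-defined):

  # Sketch: collect .txt links from a server-generated index page.
  import urllib.request
  from html.parser import HTMLParser

  class LinkCollector(HTMLParser):
      def __init__(self):
          super().__init__()
          self.links = []
      def handle_starttag(self, tag, attrs):
          if tag == "a":
              for name, value in attrs:
                  if name == "href" and value and value.endswith(".txt"):
                      self.links.append(value)

  with urllib.request.urlopen("https://www.rfc-editor.org/rfc/") as resp:
      collector = LinkCollector()
      collector.feed(resp.read().decode("utf-8", errors="replace"))
  print(collector.links[:10])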

Best regards, Julian




