Re: FTP

In theory, networks can be virtualized. In practice, virtual networks are much harder to audit. So even though I have SDN in the home, I also have separate red and black networks for work and an air-gapped yellow network for quarantine. It is really easy to verify that the yellow network is still air-gapped.

And when we get to corporate networking, it is very much the same. Every customer I have ever had has wanted a model in which their network is separated from the Internet by a moat with clearly defined physical and logical access points.

People can assert that model is wrong, but it is the model customers chose long ago, and the burden of proof is on those trying to change it.

The thing with file access is that I would really like my Windows and Mac machines to use a pure DNS- and IP-based mechanism for finding file servers. Instead they piddle about with broadcast discovery crap that doesn't work, for reasons I can't easily debug. Sometimes the servers appear in Explorer and Finder, sometimes they don't. That is the problem with ad hoc heuristic approaches.
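To make the point concrete, the kind of pure DNS-based lookup I have in mind is just an SRV query, something like the sketch below. It assumes the third-party dnspython package, and 'example.home' is a hypothetical zone you would populate on your own resolver:

    # Sketch: find SMB file servers with a plain DNS SRV lookup rather than
    # broadcast discovery. Assumes the third-party dnspython package is
    # installed; '_smb._tcp.example.home' is a hypothetical record published
    # on your local DNS server.
    import dns.resolver

    def find_file_servers(domain="example.home"):
        """Return (host, port) pairs for SMB servers advertised in DNS."""
        answers = dns.resolver.resolve(f"_smb._tcp.{domain}", "SRV")
        return [(str(rr.target).rstrip("."), rr.port) for rr in answers]

    for host, port in find_file_servers():
        print(f"smb://{host}:{port}")

No heuristics, no broadcast, and when it fails you can debug it with dig.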

I could spend a week learning how to set up Active Directory and another two days a year on maintenance, but that is a heck of an overhead just to get something that should just work to actually work.

And I really don't think anything is going to get better in the near term, not with everyone piling into 'AI' as the solution to every problem. What they really mean is 'let's find some even more complex heuristics to fix the broken design'.


I subscribe to Plex so I can watch my DVD collection. But all it is really doing for me is providing a GUI front end to what is essentially a catalog and file retrieval protocol.

My frustration here is that, in my view, a NAS should be a very simple appliance with a very limited amount of code: I slot disks into it, push data in, and pull it out when I need it. Instead I have this needy device that regularly sulks because Samba isn't configured right, or the Ethernet is plugged into the wrong port, or it's the wrong sort of Friday. The thing has millions of lines of code and is constantly demanding to install updates. And whenever it finds an update, it kills the Plex server.

Printers have become similarly unreliable. My neighbor has a perfectly good printer, yet he comes round to my house every so often because it can't print a PDF. As the saying goes, Rage Against the Machine were probably raging against a printer.


We are facing problems caused by the accretion of multiple layers of complexity, and we cannot fix those problems by adding yet more complexity. Throwing in LLM-based AI models that don't even have a semantic model of the system is like hiring a newly minted MBA from B*** Consulting and expecting them, after five days on site, to have important or useful things to say about the business you spent ten years building.

There is a need for a file transfer protocol that is not a network file system protocol; FTP and NFS meet very different needs. FTP has rather worn out its welcome, and we need to do better. HTTP provides the ability to move the raw bits, and it can even provide media streaming through ranged requests. Where we are lacking is indexing.
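A ranged request really is all a streaming client needs. A minimal sketch using only the Python standard library (the URL is a placeholder):

    # Sketch: fetch an arbitrary byte range of a file over HTTP, which is
    # the primitive a media streamer builds on. Standard library only; the
    # URL below is a placeholder.
    import urllib.request

    def fetch_range(url, start, end):
        """Fetch bytes [start, end] of a resource via an HTTP Range request."""
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            if resp.status != 206:  # server ignored the Range header
                raise RuntimeError("no partial content support")
            return resp.read()

    chunk = fetch_range("https://example.com/movie.mp4", 0, 1023)
    print(len(chunk), "bytes received")

The bits are a solved problem; what HTTP does not give you is the catalog.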

FTP is limited to the UNIX/VMS notion of file paths for labelling files. But as we all know, taxonomies are a limited approach to organizing information. At some point we just copy all the clutter off the desktop into a folder labelled '2024' and move on. And the 2024 folder has a 2022 folder in it, and...



On Tue, Jul 9, 2024 at 7:05 PM Brian E Carpenter <brian.e.carpenter@xxxxxxxxx> wrote:
Keith,

On 10-Jul-24 10:40, Keith Moore wrote:
> On 7/9/24 18:29, Phillip Hallam-Baker wrote:
>
>> One of the weaknesses of the Internet architectural model as insisted
>> upon by many here is that insisting 'anything can talk to anything'
>> makes it really hard to secure file servers locking them to only be
>> visible to the local network.
>
> Why should "the local network" (as a collection of hardware) be
> meaningful at all?

It doesn't have to be hardware, because it could be virtualized,
and therefore it doesn't have to be physically localized.

What it does have to be is securely identified, including the
boundary. There are some requirements for that:
https://www.rfc-editor.org/rfc/rfc8799.html#name-functional-requirements-of-

There are people working on such things. For example:
https://medium.com/dfinity/secure-scalability-the-internet-computers-peer-to-peer-layer-6662d451f2cc

    Brian


