Re: Request to Charter a New Working Group: Oblivious HTTP (OHTTP)

> On 08/06/2021 21:16, Michael Richardson <mcr+ietf@xxxxxxxxxxxx> wrote:
> 
>  
> Eliot Lear <lear@xxxxxxx> wrote:
>     > Is this same service going to further harm clients by making it even more
>     > difficult to block known malicious web sites?  Not only would a local
>     > deployment not be able to do this, but proxies themselves wouldn't be able to
>     > spot malware.  Combine that with some rather impressive phishing capabilities
>     > of bad actors, and aren't we just hamstringing our ability to put down
>     > malware attacks?
> 
> Without taking a position on the fundamental questions you ask, my
> understanding is that the proxy would be run by an entity that had a lot of
> properties, and that wished to provide pseudonymous access to them.
> Candidates that I can think of would include: google, godaddy, azure, ec2,
> cloudflare, *.gov, *.gc.ca, wordpress hosting companies, ... and that the target
> properties would be configured with some trust in the proxy system.

If the same entity controls both the proxy and the target, what is the actual privacy gain? The whole point of the split is that the proxy learns who is asking but not what, while the target learns what is asked but not by whom; an operator sitting on both hops can simply join the two views.
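To make the trust split concrete, here is a minimal sketch of what each hop is supposed to observe; the names and values below are purely illustrative, not anything defined by the draft:

    # Hypothetical illustration of the two-hop split (Python; all names
    # are illustrative, not the draft's API). The privacy property depends
    # on these two views never being joined.

    proxy_view = {
        "client_addr": "192.0.2.1",      # the proxy knows who is asking...
        "request": b"<encrypted blob>",  # ...but sees only ciphertext,
                                         # encrypted to the target's key
    }

    target_view = {
        "client_addr": "proxy.example",  # the target sees only the proxy...
        "request": "GET /resource",      # ...but can decrypt the request
    }

    # An operator running both hops can join the two views (e.g. by timing
    # or message correspondence) and recover (client_addr, request) pairs,
    # which is exactly the linkage the scheme is meant to remove.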

Also, if this were to be the actual deployment model, we would again be building a huge centralization mechanism for the entire web. I am always baffled by how a community that often seems obsessed with potential censorship and surveillance by governments then actively adopts architectures that foster potential censorship and surveillance by private companies, or create new gatekeeping opportunities.

More precisely, the risk is that this model would foster the birth of something I'd call an "RDN", a request delivery network. Let's suppose that, as happened with CDNs, we end up with very few global providers managing the proxy layer. What happens if one of them goes down? What happens if they agree among themselves, or are forced by external factors, to deny requests from or to specific addresses, networks, or countries? And why would anyone provide such a service at all, unless they found a way to monetize the data? Are we creating economic incentives to attack the very privacy that the new architecture is supposed to offer? Or to charge application developers for access? Or to make certain client applications faster than their competitors by refusing service to the others, or by degrading it for some of them?

This may or may not happen, depending on how many proxies are actually available; but given the degree of consolidation we're seeing in similar services, I would expect this concern to be recognized and addressed from the start. I didn't see any such consideration in the draft or in the charter.

Also, I did not see any requirements acknowledging that, after all, there are sources that need to be recognized (e.g. bots) and destinations that need to be blocked, either permanently (e.g. illegal content) or depending on who the user is (e.g. parental controls). Any proposal should also address this explicitly.

-- 
Vittorio Bertola | Head of Policy & Innovation, Open-Xchange
vittorio.bertola@xxxxxxxxxxxxxxxx 
Office @ Via Treviso 12, 10144 Torino, Italy




