On Tue, Oct 27, 2020 at 02:52:01PM +0000, Salz, Rich wrote:
> > So... should the protocol spec have a requirement stating that
> > implementations MUST ensure this can not happen, and - oh, go figure
> > out how to do that, not a protocol issue?
>
> I am not sure what you are trying to say. That it's hard to determine
> where the fault is sometimes? I don't think anyone disagrees with that.

I have seen in the past, and still see, a lot of resistance in
standards-track work to going beyond the mathematically provable exchange
of packets on a sufficiently long physical wire. In discussions with past
ADs, this has even gone as far as examples of "protocols" between two
(possibly different-vendor) software components within a single box being
called something not appropriate for IETF standards protocol work. Not
sure if you remember the history of not allowing standardization of APIs;
only fairly recently has that started to change.

So I am concerned about dogmatic restrictions on what can and cannot be
called a "protocol" with respect to vulnerabilities, and hence I would
strongly suggest not using that word in the name.

> I worry about something like "protocol-vulnerabilities@xxxxxxxx"
> becoming swamped with implementation issues, but I would support this
> if we agreed it was a two-year experiment or something.

Too much success? We are not paying money, so why the fear? Any similar
problems in other places?

But of course: how could we ever start something like this (that we are
unfamiliar with) without calling it experimental? The same goes for what
Roman has already proposed.

> > In patents, patent protection is only granted when the description is
> > sufficient to build a working model. So if you want to claim that a
> > protocol is not at fault for an attack, its description needs to be
> > sufficient to make it clear how to build a working model protecting
> > against the attack.
>
> Patents (at least in the US) typically have an "escape clause" near the
> beginning, often written like "As will be readily obvious to one
> familiar with the field"

So I see the same parallel in standards: avoiding memory exhaustion under
load should be readily obvious to one familiar with the field. Except,
apparently, to those who develop products.

IMHO, my one example is a persistently unsolved problem: how to
dynamically manage limited resources in routers amongst the different
users of those resources. There are no tools to predict memory
utilization for routers, so you cannot even build a simulation of a
network and validate up front that it will run without memory problems.

I was just pondering the millions of dollars in fines regularly levied
for failures of networks that carry life-and-death services (911). Try to
figure out what type of network and operational design you need to
proactively avoid such fines in the future (and the loss of life
associated with such services failing). And that is in the absence of
evil attackers; think about how much harder this becomes when attackers
are present. Everything is easy when, in the worst case, a service can
just fail and the worst outcome is that people have to read a book
instead of streaming a movie.

Cheers
    Toerless

--
---
tte@xxxxxxxxx
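
P.S.: To make the "readily obvious" part concrete, below is a minimal
sketch of the naive answer: a fixed per-peer memory quota. All names and
the 4 KB cap are invented for illustration, not taken from any spec or
product. It also shows exactly what it leaves unsolved: how to size and
adapt such caps dynamically across competing users.

    /* Hypothetical sketch of per-peer admission control; the names and
     * the fixed 4 KB cap are invented for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    #define QUOTA_BYTES 4096          /* assumed static per-peer cap */

    struct peer_quota {
        size_t used;          /* bytes currently charged to this peer */
    };

    /* Grant memory to a peer only while it stays under its quota, so
     * one peer cannot exhaust the shared pool under load. */
    static void *quota_alloc(struct peer_quota *q, size_t n)
    {
        if (n > QUOTA_BYTES - q->used)
            return NULL;      /* refuse: shed this peer's load */
        void *p = malloc(n);
        if (p != NULL)
            q->used += n;
        return p;
    }

    static void quota_free(struct peer_quota *q, void *p, size_t n)
    {
        free(p);
        q->used -= n;
    }

    int main(void)
    {
        struct peer_quota q = { 0 };
        void *a = quota_alloc(&q, 4096); /* fits the cap: granted */
        void *b = quota_alloc(&q, 1);    /* would exceed it: refused */
        printf("first: %s, second: %s\n",
               a ? "granted" : "refused",
               b ? "granted" : "refused");
        quota_free(&q, a, 4096);
        return 0;
    }

The fixed cap is the easy part; choosing QUOTA_BYTES so that it neither
starves legitimate peers nor lets the box run out of memory is precisely
the dynamic resource-management problem described above.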