On Thu, Nov 12, 2015 at 10:04 AM, Dave Crocker <dhc@xxxxxxxxxxxx> wrote:
> There should be little comfort in seeing that the IETF is not the only
> place that being personally abusive is the norm:
>
>    Linux kernel dev Sarah Sharp quits, citing ‘brutal’ communications
>    style
>
>    http://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html
>
> Direct assaults on specific individuals are the easy examples to see,
> even if they do not cross over as an attack on an identified class.

I think the IETF has issues in this area, but I haven't seen behavior remotely like the rant about the IPv6 code someone attempted to commit to the kernel. There are pathologies in IETF interactions, but I don't think this is a particularly good illustration, except to the extent that it is an example of someone trying to do the right thing, getting a large amount of aggravation, and giving up.

In that particular case the author of the rant was complaining about this specific line in the IPv6 stack:

    if (overflow_usub(mtu, hlen + sizeof(struct frag_hdr), &mtu) || mtu <= 7)
        goto fail_toobig;

Now, without wanting to rehash the dispute here: the code corrects a real security problem in the original code, and an architectural issue in IPv6 in general that is understood today but was not recognized in 1995. Specifically, if you have a protocol in which a byte length is used to denote the length of nested structures, an inconsistency can arise when a sequence such as 8 { 3 { a } 4 { b c } } is incorrectly presented as 8 { 100000000 { a } 4 { b c } }. That is the Heartbleed bug.
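To make the shape of the problem concrete, here is a minimal sketch of the kind of check involved. This is my own illustration, not the kernel code; inner_fits() is a made-up helper. The point of a check like overflow_usub() is that subtracting a header size from an unsigned length can silently wrap instead of going negative, so the consistency of nested lengths has to be tested explicitly before anything trusts the declared inner length.

    /* Sketch only: is the declared length of a nested structure
     * consistent with what is actually left in the outer buffer? */
    #include <stddef.h>
    #include <stdio.h>

    /* Returns 1 and writes the payload length if the declared inner
     * length fits inside the remaining outer buffer after the header. */
    static int inner_fits(size_t outer_remaining, size_t header_len,
                          size_t declared_inner, size_t *payload)
    {
        if (header_len > outer_remaining)
            return 0;               /* subtracting would wrap around */
        if (declared_inner > outer_remaining - header_len)
            return 0;               /* Heartbleed-style inconsistency */
        *payload = declared_inner;
        return 1;
    }

    int main(void)
    {
        size_t payload;

        /* Honest peer: 8 bytes left, 2-byte header, claims 3 bytes of data. */
        printf("honest: %d\n", inner_fits(8, 2, 3, &payload));

        /* Lying peer: same buffer, but claims 100000000 bytes of data. */
        printf("lying:  %d\n", inner_fits(8, 2, 100000000, &payload));
        return 0;
    }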
What I see there is someone making a perfectly reasonable engineering decision and then being bullied into submission by someone throwing their reputation about.

All bugs may be shallow given enough eyes. But those of us who have studied the history of intelligence services know that there are countless examples of agencies with an abundance of eyes studying a subject, whose advice and conclusions were then ignored by a tiny number of individuals.

IETF aggravation doesn't come in the form of 'This is S**t'. It comes in the form of 'you need input from Fred', where Fred turns out to have no actionable advice.

In particular, I think that we have on rather too many occasions failed to remember that the whole point of the Internet architecture is to see what could happen if we just let chaos run. The approach to architecture that is appropriate at layer 3 in the stack is not appropriate at layer 7. For the Internet to work it is essential that the routers, switches, etc. that make up the 'Inter-network' have a common understanding of what they are doing and perform their actions in ways that other parts of the infrastructure can predict. Caution is required; inappropriate changes could potentially bring down the whole stack of cards.

At the application layer, the approach has rather too often been to tell people to 'be careful' and that what they are doing is 'difficult' or 'tricky', without any explanation of what to be careful of, how to avoid the difficulties they might face, or even what those difficulties are. This ensures that people attempting to address a problem in that area get the maximum discouragement and the minimum amount of useful assistance.

There is a Reverse Dunning-Kruger effect where a person realizes that a problem is difficult and assumes that since they can't solve it, nobody else can either. They must therefore warn anyone who might attempt that problem of how difficult it is going to be and, if possible, discourage them from the attempt.

Which is of course nonsense: the reason the Web worked where other attempts to build networked hypertext infrastructures failed is that Tim Berners-Lee decided to ignore the hard problems entirely. Referential transparency is a difficult problem? OK, let's not bother then; show '404 Not Found'. But! But! I have spent 20 years designing algorithms for efficient referential transparency!

At the moment we have several points in the IETF process where supplicants are asked to jump through hoops to get a code point assigned for an application-layer protocol. And one of the reasons we have those checkpoints is that there are lots of people who seem to think something really bad can happen if someone accidentally does the wrong thing.

Hello. Why are we so bothered about people bringing down the Internet by accident when the folk who have been trying to do the same thing on purpose haven't managed it yet? Yes, I know they cause a lot of aggravation and annoyance, but they aren't going to succeed. The only way someone could bring down the Internet with an application is if they produced one that was so awesome and useful that the three billion users all decided they had to get it. And if someone has an idea that powerful, denying them an SRV registration isn't going to stop them.

I would like it if we could get back to the idea of the Internet being an enabling platform for people to try stuff out. I would like it if all those folk who are getting excited about the Internet of Things had a way to quickly and easily get 1) a single code point that assigned them a DNS SRV prefix, an HTTP .well-known prefix and a chunk of URN space, and 2) a short guide with suggestions for how they might use it and avoid later regrets.

So instead of having to fire off three separate applications for the Mesh, I would make a request for the codepoint 'MatheMaticalMesh' to be reserved to me. That would automatically give me authority to define documents specifying the interpretation of:

    _mathematicalmesh._wk.example.com SRV ...
    http://example.com/.well-known/MatheMaticalMesh/
    urn:mathematicalmesh:

My problem with the IETF isn't that people will say 'this is sh*t' to my face. My problem is that when I make a request I can never really know if the reason it takes six months or two years to get a response is because everyone is really busy or because that is what people are saying behind my back.
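PS: The mapping I have in mind is purely mechanical. A minimal sketch of deriving the three identifiers from the one code point; the label and prefix forms here are illustrative only, not anything a registry has agreed to:

    /* Sketch only: expand one reserved code point into the SRV label,
     * the .well-known prefix and the URN namespace shown above. */
    #include <ctype.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Copy src into dst, lowercased, for the DNS and URN forms. */
    static void lower(char *dst, const char *src, size_t n)
    {
        size_t i;
        for (i = 0; src[i] != '\0' && i + 1 < n; i++)
            dst[i] = (char)tolower((unsigned char)src[i]);
        dst[i] = '\0';
    }

    int main(void)
    {
        const char *codepoint = "MatheMaticalMesh";
        const char *domain = "example.com";
        char label[64];

        lower(label, codepoint, sizeof label);

        printf("_%s._wk.%s SRV ...\n", label, domain);
        printf("http://%s/.well-known/%s/\n", domain, codepoint);
        printf("urn:%s:\n", label);
        return 0;
    }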