On 26/11/2018 21:59, Joe Touch wrote:
Fundamentally it is about economics. Doing complex processing in the fast path is expensive: it is expensive to have the cycles available, and in designs that use fixed-logic assistance it is pretty much infeasible to cope with multiple options, as the combinatorial complexity explodes on you. As packet size increases, this is also expensive in terms of the fast packet-caching memory needed.

In the days when the fast-path/slow-path decision was first taken, there were also fundamental limits on what could be implemented at any price, thus pushing the cost from equipment to physical infrastructure. There are some use cases that say we need to get 1 Tb/s to the host for some applications, so we cannot be sure that the fundamental parsing-limit problem will not recur. In any case, fast feature = silicon = cost, and whilst you can, or at least could, ride Moore's law, at the end of the day you have to trade features against port density against power against capex against opex... and all this is without the issue of the Internet's carbon footprint coming under scrutiny.

So what this came down to, at the time the feature split was taken, was a decision on economics, i.e. is the cost of the additional feature worth the price paid? That is still true, and I think we need to remember that as we add non-optional features into the fast path.

- Stewart