On Sun, Nov 29, 2009 at 10:13 AM, Alan Cox <alan@xxxxxxxxxxxxxxxxxxx> wrote:
>> If decoding can *only* be sanely handled in user-space, that's one
>> thing. If it can be handled in kernel, then that would be better.
>
> Why ?
>
> I can compute fast fourier transforms in the kernel but that doesn't make
> it better than doing it in user space.

Of course not.

> I can write web servers in the kernel and the same applies.

I'm not so young as to not recall Tux. That was again a bad idea, for the
same reason: it introduced unnecessary complexity. Enabling userspace to
service web requests faster improved all user-space code. Yay.

The question is which solution is more complex: the current one, which
requires userspace to be an active participant in the decoding so that we
can handle bare diodes hooked up to a sound card, or having the kernel
decode for the sane devices and providing some fallback for broken
hardware. The former has the advantage of flexibility, at the cost of
increased fragility, a larger security surface, and latency in responding
to events; the latter requires maintaining two different decoding paths,
at least if you want to support odd-ball hardware.

Jon is asking for an architecture discussion, y'know, with use cases.
Maxim seems to be saying it's obvious that what we have today works fine.
Except it doesn't appear that we have a consensus that everything is fine,
nor an obvious winner for how to reduce the complexity here and keep the
kernel in a happy, maintainable state for the long haul.

Who knows, perhaps I misunderstood the dozens of messages up-thread --
wouldn't be the first time, in which case I'll shut up and let you get
back to work.
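P.S. For concreteness, since "userspace as an active participant" is doing
a lot of work in that paragraph: today that means an application reading
raw mode2 pulse/space samples from a lirc character device and doing all
of the protocol matching itself. A minimal sketch, not a real decoder --
the /dev/lirc0 path is illustrative and error handling is mostly elided:

/*
 * Sketch of the userspace side of the current model: pull raw mode2
 * pulse/space samples off a lirc chardev and print them.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

#define PULSE_BIT  0x01000000	/* set: IR pulse; clear: space */
#define PULSE_MASK 0x00FFFFFF	/* low 24 bits: duration in microseconds */

int main(void)
{
	int fd = open("/dev/lirc0", O_RDONLY);
	uint32_t sample;

	if (fd < 0) {
		perror("open /dev/lirc0");
		return 1;
	}

	/*
	 * Every sample round-trips through this loop; a real consumer
	 * (lircd) matches the timings against a protocol description
	 * here before anything in the system looks like a keypress.
	 */
	while (read(fd, &sample, sizeof(sample)) == sizeof(sample))
		printf("%s %u\n",
		       (sample & PULSE_BIT) ? "pulse" : "space",
		       (unsigned)(sample & PULSE_MASK));

	close(fd);
	return 0;
}

Under the in-kernel alternative, the same remote just shows up as key
events on an evdev node, and none of this lives in the application at all.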