I've already mentioned timing exactness. Furthermore, I don't think context switches you can do without will do anything for performance. So, as for my terminology: I get shorter code paths, I get a modest performance benefit, I save a lot on memory copies, and my timing exactness / latency improves as well. Since I do get benefits on all fronts, there is no mix-up in terminology.

As long as the application can be kept simple enough, an in-kernel approach works. At least that is my conclusion.

> If there weren't performance and overhead differences between a
> protected memory approach and doing it in kernel space, why are
> nearly all realtime OSes without that distinction?

In my humble opinion, this is because they are that simple on purpose. Being simple is an advantage if what you want is determinism. Optimizing for short code paths and minimal latency may seem simplistic, but all the throughput optimizations found in Linux, together with its more complex code, make it hard to use in embedded realtime applications.

With kind regards,
Oliver Korpilla

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/