On Mon, Jun 12, 2017 at 12:24 AM, Alan Cox <alan@llwyncelyn.cymru> wrote:

> Moving to large code isn't a problem, but large data means far pointers
> and that means a huge slowdown in performance.

Ah, that hadn't occurred to me, but it makes sense. Thanks!

> For userspace large models also mean you can't do swapping until you have
> 286 protected mode.

I assume the issues are that you can't swap segments in on demand because
there's no MMU, so whenever you schedule the process you'd have to swap all
of it in, and that the process now holds physical memory addresses (rather
than addresses relative to segment registers the kernel can change), so you
always have to swap it back to the same location in memory?

I presume this is why the MS/PC-DOS DOSSHELL (and no doubt there were
various other tools that did the same) was a "task swapper", which swapped
your entire process to/from disk when you switched tasks?

To support larger programs, would it be worth sacrificing the ability to
swap those particular programs?

And regardless of moving away from tiny model, I gather that moving to
OpenWatcom just for the optimisations and ANSI C would potentially be
worthwhile in itself?

> It's less likely one would be scribbled on, but in fact if you give a
> 16bit process 64K code and 64K data it has to corrupt segment registers
> to scribble outside of its space and that makes it more reliable than you
> might expect.

Oh, of course! Thanks!

David
--
To unsubscribe from this list: send the line "unsubscribe linux-8086"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html