> There's no way to solve the endianness issues, but using emulation to
> handle missing instructions (be they floating point or ll/sc, or
> what-have-you) solves the minor differences in the instruction set from
> the library/application standpoint.  If all MIPS processors used the
> same instruction set then we wouldn't have the problem at all.  Of
> course there are very good reasons (and probably some silly ones
> too...) why ISAs are tailored.
>
> The kernel is already going to have to adjust some anyway, so keeping
> the differences in the kernel doesn't increase the testing burden.
> Throwing the problem over to glibc (and the toolchain) does increase
> the number of active configurations.

And for the sake of beating a dead horse that perhaps only I can see,
there is a philosophical choice that must sometimes be made: whether to
achieve binary portability by compiling to the lowest common
denominator, or by emulating the missing operations in the OS.  The
former is more efficient for downrev parts; the latter is more
efficient for contemporary parts.  Those of us who work on recent and
future designs will always tend to favor the latter - what's the point
of using MIPS32/MIPS64 and beyond CPUs if GNU/Linux binaries are going
to treat them like R3000s?

        Kevin K.
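
For concreteness, here is a minimal sketch of the trap-and-emulate idea
discussed above: an LL/SC pair emulated from the reserved-instruction
exception on a CPU that lacks those opcodes.  The trap_frame layout,
the ll_bit bookkeeping, and the emulate_llsc helper are hypothetical
simplifications for illustration only, not the actual Linux/MIPS code,
which also has to clear the link on context switch, cope with SMP, use
safe user-space accessors, and deal with branch delay slots.

/*
 * Sketch only: emulate LL/SC after a reserved-instruction trap on a
 * CPU without those instructions.  All names here are hypothetical.
 */
#include <stdint.h>
#include <stdbool.h>

struct trap_frame {            /* hypothetical saved CPU state      */
    uint32_t gpr[32];          /* general-purpose registers         */
    uint32_t epc;              /* address of the faulting insn      */
};

/* Link state a real kernel would clear on context switch/interrupt. */
static bool     ll_bit;
static uint32_t ll_addr;

/* MIPS I-type fields: opcode | base | rt | signed 16-bit offset.    */
#define OPCODE(i)  (((i) >> 26) & 0x3f)
#define BASE(i)    (((i) >> 21) & 0x1f)
#define RT(i)      (((i) >> 16) & 0x1f)
#define OFFSET(i)  ((int16_t)((i) & 0xffff))

#define OP_LL 0x30
#define OP_SC 0x38

/*
 * Returns true if the instruction was emulated, false if the trap
 * should be passed on (e.g. SIGILL to the offending process).
 */
static bool emulate_llsc(struct trap_frame *regs, uint32_t insn)
{
    uint32_t addr = regs->gpr[BASE(insn)] + OFFSET(insn);
    volatile uint32_t *mem = (volatile uint32_t *)(uintptr_t)addr;

    switch (OPCODE(insn)) {
    case OP_LL:                         /* plain load + remember the link */
        regs->gpr[RT(insn)] = *mem;
        ll_bit  = true;
        ll_addr = addr;
        break;
    case OP_SC:                         /* store only if the link survived */
        if (ll_bit && ll_addr == addr) {
            *mem = regs->gpr[RT(insn)];
            regs->gpr[RT(insn)] = 1;    /* success flag, per the ISA */
        } else {
            regs->gpr[RT(insn)] = 0;    /* failure: caller retries */
        }
        ll_bit = false;
        break;
    default:
        return false;                   /* not ours to emulate */
    }

    regs->epc += 4;   /* skip the emulated insn (ignores delay slots) */
    return true;
}

The point is simply that an application compiled for a newer ISA keeps
running unchanged on a part that lacks the instruction, at the cost of
a trap, whereas building for the lowest common denominator pays that
cost (or worse) on every CPU.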