Audioslave - 7M3 - Live wrote:
> Martin Stricker wrote:
> > What do you call "optimization"? -mach or -mcpu?

Typo, sorry! I of course mean -march...

> I was thinking in terms of the low end processor. I am getting that
> -mach takes into consideration the low end instructions. I am getting
> that -mcpu refers to maximum machine performance, using the older
> instruction sets. This should probably be set to P4 classification,
> as suggested earlier by another lister.

-mcpu tunes the code for the specified processor while staying compatible
with lower architectures, so the binary still runs on any i386-class CPU.
-march optimizes for the specified processor and may use its new
instructions, so lower (older) processors might not be able to execute the
result.

You greatly overestimate the impact of the newer instruction sets on your
regular applications. The MMX instructions, for example, were introduced
for 3D computer games. If you have an application dealing with 3D vector
graphics it will benefit from MMX, as will programs that do certain
vector-based computing. All other programs will get *absolutely no* gain
from the MMX instructions! A Pentium 166 MMX is about 10% faster than a
Pentium 166 (without MMX), but that's *not* because of MMX - it's solely
because the Pentium MMX has double the first-level cache (RAM that runs at
processor speed and is included on the processor chip). The same goes for
most other instruction sets - apart from a few specific applications, they
bring no gain for you.

> > Most of the newly introduced instructions are useful only for
> > special purposes. So in most cases it not only doesn't make any
> > sense to use these new instructions, it is even bad! Furthermore,
> > most of the really low-level functionality is provided by the
> > kernel and glibc, and both are available for several processor
> > types. Nearly all regular applications will gain *nothing* from
> > using the new instruction sets.
> > Red Hat does the Right Thing (TM).

> Being that both are compiled by gcc, it sounds like they have to
> depend upon a very fine-tuned compiler to benefit. If the programs
> put the tasks off to the kernel and the libraries, then they should
> be "speaking the same language". This is my thought, not based on
> research.

They all do. They all talk "i386". The newer processors have just learned
a few new words for very specific purposes, like you will learn new words
if you become, say, a gardener. For talking about gardening you will need
these new words, but you will never use them (at least ideally) while
talking about any other topic. To get back to processors: if you don't do
vector graphics, you have *absolutely no* use for the MMX instructions.
The same is true for the other new instruction sets.

> I was thinking that someone stated that the processor had to shift
> into a certain state to deal with MMX. This led me to believe that
> the processor had to shift into different modes for each set of
> instructions.

Not that I'm aware of, but I'm not an expert.

> I've seen the binaries crash before. MMX arch was one time it seemed
> more prevalent. But in earlier days, the core dump was a very common
> occurrence. (RHL 5.2 era)

Core dumps are usually caused by badly written C/C++ programs doing
something bad with a pointer. For security reasons the offending program
is then t(h)rashed by the kernel, which dumps that program's memory core
for debugging purposes. A program with invalid instructions just won't
run: grab a binary compiled, say, for Linux on a RISC processor - it won't
execute, except maybe to give an error message, because the kernel tries
to protect the system.
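
To make the -march/-mcpu difference concrete, here is a minimal sketch.
The file name and the pentium4 target are only examples, and on newer gcc
releases -mcpu is spelled -mtune:

/* hello_arch.c - trivial program to compare compiler flags with.
 *
 * gcc -O2 -mcpu=pentium4 hello_arch.c -o hello_tuned
 *   schedules the code to run best on a Pentium 4, but emits only
 *   baseline i386 instructions, so the binary still runs on older CPUs.
 *
 * gcc -O2 -march=pentium4 hello_arch.c -o hello_p4only
 *   may emit Pentium 4 instructions (SSE2 and friends); running this
 *   binary on an older processor can die with "illegal instruction".
 */
#include <stdio.h>

int main(void)
{
    printf("Every instruction the compiler emitted was understood.\n");
    return 0;
}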
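
And to show what "vector-based computing" means in practice, here is a
rough sketch of the kind of loop MMX was made for, using the Intel-style
intrinsics from <mmintrin.h> that gcc ships (build with -mmmx). This is
only an illustration, not a claim that your applications contain such
loops:

/* mmx_add.c - the sort of packed integer work MMX helps with:
 * adding two arrays of 16-bit values four at a time.
 * Build: gcc -O2 -mmmx mmx_add.c -o mmx_add
 */
#include <stdio.h>
#include <mmintrin.h>

static void add_shorts_mmx(short *dst, const short *a, const short *b, int n)
{
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m64 va = *(const __m64 *)(a + i);          /* load 4 shorts   */
        __m64 vb = *(const __m64 *)(b + i);
        *(__m64 *)(dst + i) = _mm_add_pi16(va, vb);  /* 4 adds at once  */
    }
    _mm_empty();            /* leave MMX state so the x87 FPU works again */
    for (; i < n; i++)      /* scalar tail for leftover elements */
        dst[i] = a[i] + b[i];
}

int main(void)
{
    short a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    short b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    short d[8];
    int i;

    add_shorts_mmx(d, a, b, 8);
    for (i = 0; i < 8; i++)
        printf("%d ", d[i]);
    printf("\n");
    return 0;
}

The _mm_empty() call, by the way, is the grain of truth behind the
"certain state" remark quoted above: MMX shares its registers with the x87
floating-point unit, so MMX code has to clear that state before ordinary
floating-point code runs again.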
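
If you want to see which of those "new words" your own processor knows,
the "flags" line in /proc/cpuinfo lists them, or a newer gcc can ask the
CPU directly. The __builtin_cpu_supports() builtin used below did not
exist yet in the gcc 3.x that Shrike ships, so treat this as a sketch for
current compilers:

/* cpu_words.c - ask the processor which instruction-set "words" it knows.
 * Build (gcc 4.8 or later): gcc -O2 cpu_words.c -o cpu_words
 * The same information appears in the "flags" line of /proc/cpuinfo.
 */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();   /* must run before __builtin_cpu_supports() */
    printf("mmx : %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
    printf("sse : %s\n", __builtin_cpu_supports("sse")  ? "yes" : "no");
    printf("sse2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    return 0;
}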
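
Finally, a small sketch of the two failure modes described above: a bad
pointer (core dump) versus an instruction the CPU refuses (illegal
instruction). The ud2 opcode is just a convenient way to force the second
case on any x86; the file name and arguments are made up for the example:

/* crash_demo.c - two different ways a binary can die.
 * Build: gcc crash_demo.c -o crash_demo
 *
 *   ./crash_demo segv   bad pointer -> kernel sends SIGSEGV, core dump
 *   ./crash_demo ill    unknown instruction -> kernel sends SIGILL
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "segv") == 0) {
        int *volatile p = NULL;     /* volatile keeps the optimizer honest */
        *p = 42;                    /* invalid pointer write */
    } else if (argc > 1 && strcmp(argv[1], "ill") == 0) {
        /* ud2 is defined to raise "invalid opcode", much like running
         * code that uses instructions this processor doesn't have. */
        __asm__ volatile ("ud2");
    } else {
        printf("usage: %s segv|ill\n", argv[0]);
    }
    return 0;
}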
Best regards,
Martin Stricker
--
Homepage: http://www.martin-stricker.de/
Linux Migration Project: http://www.linux-migration.org/
Red Hat Linux 8.0 for low memory: http://www.rule-project.org/
Registered Linux user #210635: http://counter.li.org/