On Mon, 2010-02-01 at 16:42 -0500, David Daney wrote:
> Guenter Roeck wrote:
> [...]
> >
> > +static inline void cpu_set_vmbits(struct cpuinfo_mips *c)
> > +{
> > +	if (cpu_has_64bits) {
> > +		unsigned long zbits;
> > +
> > +		asm volatile(".set mips64\n"
> > +			"and %0, 0\n"
> > +			"dsubu %0, 1\n"
> > +			"dmtc0 %0, $10, 0\n"
> > +			"dmfc0 %0, $10, 0\n"
> > +			"dsll %0, %0, 2\n"
> > +			"dsra %0, %0, 2\n"
> > +			"dclz %0, %0\n"
> > +			".set mips0\n"
> > +			: "=r" (zbits));
> > +		c->vmbits = 64 - zbits;
> > +	} else
> > +		c->vmbits = 32;
> > +}
> > +
>
> It should be possible to express this in 'pure' C using
> read_c0_entryhi()/write_c0_entryhi(), also you need to be sure you are

Sure, no problem.

> not writing 1s to any reserved bits of the register.
>
That may be tricky, since the upper bits are reserved in some
architectures. For example, the 20Kc core specification says that bits
61:40 are reserved and must be written with 0.

I can write, say, 0x3fffffffffff0000 to avoid writing into lower reserved
bits, but that won't help for any upper reserved bits. Would that be
acceptable / better ?

Guenter
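
For reference, a 'pure' C variant along those lines might look roughly
like the sketch below. It leans on read_c0_entryhi()/write_c0_entryhi()
and back_to_back_c0_hazard() from the MIPS headers, uses fls64() in place
of dclz, and takes the 0x3fffffffffff0000 mask from the discussion above;
it also assumes that reserved upper bits read back as zero. This is only
a sketch of the idea, not the submitted patch.

#include <linux/bitops.h>
#include <asm/cpu-features.h>
#include <asm/cpu-info.h>
#include <asm/hazards.h>
#include <asm/mipsregs.h>

static inline void cpu_set_vmbits(struct cpuinfo_mips *c)
{
	if (cpu_has_64bits) {
		/*
		 * Set candidate VPN bits; the low 16 bits (ASID and the
		 * low reserved field) stay clear.  Reserved upper bits
		 * are assumed to read back as zero.
		 */
		write_c0_entryhi(0x3fffffffffff0000ULL);
		back_to_back_c0_hazard();
		/* Highest bit that stuck gives the implemented VA width. */
		c->vmbits = fls64(read_c0_entryhi() & 0x3fffffffffff0000ULL);
		if (c->vmbits < 32)
			c->vmbits = 32;
	} else {
		c->vmbits = 32;
	}
}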