On 07/13/10 10:26, David C. Rankin wrote:
> Can anyone think of the possible mechanism that would cause a kernel to
> boot once after rebuilding the initramfs, but then be corrupt for every
> boot thereafter??
Do you rebuild the initramfs while running 2.6.32?
Do you let the machine sit shut down for a minute between each boot?
Yes, I can think of a mechanism. I'll tell it by example: my machine
has a built-in webcam that the OS has to upload firmware to on every
boot. Sometimes when I boot, the webcam ends up in a screwed-up state
and doesn't work, and sometimes rebooting doesn't help. Shutting down
and waiting a few minutes sometimes helps; booting into Mac OS X and
then shutting down can also change things, often for the better (after
all, this hardware and OS X were made for each other). I believe the
webcam has some sort of volatile memory that decays slowly and randomly
when not powered (like RAM does). My guess is that when it boots with
its memory containing partly corrupted firmware, it gets into some kind
of trouble, depending on the exact state of that memory, that keeps the
OS from simply fixing it by uploading fresh firmware.
That's an example of how something could persist across reboots. Maybe
when you rebuild on 2.6.32, what actually matters is that you are booted
into a good kernel at the time, one that has initialized some piece of
hardware into a reasonable state, and that state is likely to survive a
reboot. Then 2.6.34 screws the state up in a way its own next boot can't
cope with, while 2.6.32 is a good enough kernel to re-initialize the
hardware properly regardless. (It could be non-volatile memory too, and
the randomness could come from the Linux boot process being
nondeterministic as it is.)
Or...maybe the explanation is entirely different.
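Either way, it might be worth ruling out the boring explanation first:
checksum the initramfs images right after the one good boot and again
after a failed one. If the hashes match, the files on disk aren't what's
going bad and the persistence is somewhere else. A rough sketch of what
I mean (the /boot glob patterns are just guesses, adjust them to whatever
your /boot actually contains):

#!/usr/bin/env python3
# Hash whatever initramfs images are in /boot so the values can be
# compared across a good boot and a bad one; if they match, the file
# on disk isn't what's changing.
import glob
import hashlib

for path in sorted(glob.glob("/boot/initramfs-*") + glob.glob("/boot/initrd*")):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large images don't get slurped into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    print(digest.hexdigest(), path)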
-Isaac