Hi,
I've been compiling a lot of kernels and trying some patches here and
there. Basically, I have a "safe" kernel and protected build tree to
boot back to, and an "in-use" kernel/tree where I apply patches,
compile, then boot. (Both are in /usr/src, although I know they don't
have to be.) Pretty simple.
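One thing that may help keep the two setups from stepping on each other
(a sketch, assuming a 2.4-style top-level Makefile; the suffix values
are just examples, not from my actual trees): give each tree its own
EXTRAVERSION, so their modules install under separate /lib/modules
directories and can never overwrite one another:

```makefile
# Top of each tree's Makefile (2.4.x style); suffixes are examples.
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 20
EXTRAVERSION = -safe      # in the protected tree
# EXTRAVERSION = -inuse   # in the experimental tree instead
```

With that, the safe kernel's modules live in /lib/modules/2.4.20-safe
and the experimental ones in /lib/modules/2.4.20-inuse.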
When one patch works (i.e., the kernel boots with it), I go on to try
the next patch using the same kernel tree as the last one, i.e. the one
I just booted from. This usually works, but sometimes the subsequent
kernel compiles fail in strange ways, like running out of loop devices.
I finally figured out that this must have something to do with the fact
that I am re-compiling in the same tree I built the running kernel in.
I script the whole build process, and modules_install likely gets run
even if there is a compile error. Then one of the newly copied-in
modules has a different module version checksum even though the module
itself didn't change. (I've seen references saying this can happen.) Or
the module really did change. In either case I get a new module in
/lib/modules/2.4.20-inuse that the running system doesn't like. That's
my theory, but it's only happened a few times, so I'm not sure yet.
So I may have to fix my over-simple build script so that it doesn't
install modules if there is any error. But I am also wondering how
others structure their work environment when doing iterative compile,
install, test, recompile cycles for the kernel and/or modules. Is there
a "standard" way to do this that I am missing? Something that avoids
this problem altogether, yet supports the process and can be automated?
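For the script part, the fix I have in mind looks something like this
(a sketch only; the path, targets, and function name are illustrative
and assume a 2.4-style tree, not necessarily my exact layout). The idea
is that each step runs only if the previous one succeeded, so a failed
compile can never be followed by modules_install:

```shell
#!/bin/sh
# Abort-on-error kernel build sketch for a 2.4-style tree.
# TREE is an assumed example path -- adjust to your setup.

TREE=/usr/src/linux-inuse   # experimental tree

build_and_install() {
    cd "$TREE" || return 1
    # Each step runs only if the one before it succeeded, so
    # modules_install is never reached after a failed compile.
    make dep          &&
    make bzImage      &&
    make modules      &&
    make modules_install
}
```

Calling `build_and_install || exit 1` from the wrapper script then
leaves /lib/modules untouched whenever any compile step fails.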
Thanks for any ideas,
Mark
--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/