On Wed, Jun 13, 2018 at 05:09:11PM -0700, Kevin Hilman wrote:
> Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> writes:
> 
> > On Tue, Jun 12, 2018 at 03:08:12PM -0700, kernelci.org bot wrote:
> >> stable-rc/linux-3.18.y boot: 52 boots: 28 failed, 18 passed with 1 offline, 5 conflicts (v3.18.112-22-gb0582263e3c9)
> >>
> >> Full Boot Summary: https://kernelci.org/boot/all/job/stable-rc/branch/linux-3.18.y/kernel/v3.18.112-22-gb0582263e3c9/
> >> Full Build Summary: https://kernelci.org/build/stable-rc/branch/linux-3.18.y/kernel/v3.18.112-22-gb0582263e3c9/
> >>
> >> Tree: stable-rc
> >> Branch: linux-3.18.y
> >> Git Describe: v3.18.112-22-gb0582263e3c9
> >> Git Commit: b0582263e3c9810fd887ca92d19cb9ff30a4d9f6
> >> Git URL: http://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
> >> Tested: 24 unique boards, 12 SoC families, 13 builds out of 183
> [...]
> 
> > That is a lot of new failures, did the whole lab fail, or is this really
> > a problem in v3.18.112 here?
> 
> Whole lab failure (more precisely, lab operator failure) ;)
> 
> gak, I updated the rootfs images to the latest buildroot, which forced
> me to upgrade the kernel headers used to build the rootfs from v3.10 to
> v4.4. So I guess it's no surprise that every single board panic'd as
> soon as it hit userspace.
> 
I build my root file systems with buildroot, and had a similar problem.
My fix was to patch buildroot to let me use older linux headers.

Guenter

> I downgraded the rootfs, and re-ran all those boot tests, and now things
> are 100% passing in my lab.
> 
> Sorry for the noise,
> 
> Kevin
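
For reference, the "older headers" workaround described above comes down to
pinning buildroot's kernel-header selection instead of taking the default
shipped with the new release. A minimal, hypothetical defconfig fragment
along those lines might look like the one below. The symbol names come from
buildroot's package/linux-headers/Config.in and change between releases
(some releases also want a matching "custom kernel headers series" entry),
and the 3.10 choice had already been dropped from then-current buildroot,
which is why a local patch to Config.in is needed before this is accepted.
The version string is purely illustrative.

  # Hypothetical fragment: build the rootfs against 3.10-series headers
  # instead of the buildroot default. On releases that no longer offer
  # a 3.10 option, package/linux-headers/Config.in has to be patched
  # locally to accept it.
  BR2_KERNEL_HEADERS_VERSION=y
  BR2_DEFAULT_KERNEL_VERSION="3.10.108"

The underlying point is that userspace built against kernel headers newer
than the kernel actually booted on the boards is not guaranteed to run, so
the headers used for the rootfs need to stay at or below the oldest kernel
under test.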