Re: Why built-in modules slow down kernel boot?

On Tue, Sep 30, 2014 at 12:27:12PM +0200, Michele Curti wrote:
> Hi all,
> it's just a curiosity. 
> 
> Since using an initramfs doubles the kernel boot time, I decided to
> experiment a little by compiling the modules required to mount the root
> filesystem as built-in (starting from a localmodconfig).
> 
> Everything is OK: the system starts and the kernel boot time is good:
> 	Startup finished in 1.749s (firmware) + 375ms (loader) + 
> 	1.402s (kernel) + 716ms (userspace) = 4.244s
> (from systemd-analyze). 
> 
> My next idea was: "Well, why not make all modules built-in? That way I
> avoid reading from disk at every module load, and all of them get loaded
> anyway." But the result was the opposite of what I expected: kernel boot
> time increased from 1.4 to 3 seconds.
> 
> So my question is: how can this be explained?
> 
> My theory is that with all the modules compiled as built-in, the kernel
> calls all the module __init functions sequentially (using a single
> core?) and lets userspace start only when everything is done.

Yes, that is correct.  And some of those init functions do lots of "odd"
things, assuming that the hardware for their drivers really is present,
so they can take a while to figure out that they shouldn't be running at
all.
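
To make that concrete, here is a minimal sketch of what happens, modeled
loosely on do_initcalls() in init/main.c (simplified, not the actual
kernel source): the linker collects the __init function of every
built-in driver into a table, and each entry runs to completion before
the next one starts, and before userspace is launched.

	typedef int (*initcall_t)(void);

	/* Every built-in __init function ends up in this table
	 * (simplified: the real kernel orders the entries into
	 * several initcall levels). */
	extern initcall_t __initcall_start[], __initcall_end[];

	static void do_initcalls(void)
	{
		initcall_t *fn;

		/* One call at a time, on one CPU.  A probe routine
		 * that spends seconds looking for hardware that isn't
		 * there stalls everything queued behind it. */
		for (fn = __initcall_start; fn < __initcall_end; fn++)
			(*fn)();
	}

If you want to see where the time actually goes, boot with
initcall_debug on the kernel command line; the kernel will then log each
initcall and how long it took.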

Also, a larger kernel takes longer to read off the disk and load into
memory, although with an SSD it shouldn't be noticeable.

greg k-h




