On Mon, Oct 24, 2022 at 09:22:35AM -0400, Prarit Bhargava wrote:
> On 10/20/22 03:19, Petr Mladek wrote:
> > On Tue 2022-10-18 15:53:03, Prarit Bhargava wrote:
> > > On 10/18/22 14:33, Luis Chamberlain wrote:
> > > > On Sat, Oct 15, 2022 at 11:27:10AM +0200, Petr Pavlu wrote:
> > > > > The patch does address a regression observed after commit 6e6de3dee51a
> > > > > ("kernel/module.c: Only return -EEXIST for modules that have finished
> > > > > loading"). I guess it can have a Fixes tag added to the patch.
> > > > >
> > > > > I think it is hard to split this patch into parts because the implemented
> > > > > "optimization" is the fix.
> > > >
> > > > git describe --contains 6e6de3dee51a
> > > > v5.3-rc1~38^2~6
> > > >
> > > > I'm a bit torn about this situation. Reverting 6e6de3dee51a would be the
> > > > right thing to do, but without it, it still leaves the issue reported
> > > > by Prarit Bhargava. We need a way to resolve the issue on stable, and
> > > > then your optimizations can be applied on top.
> > > >
> > > > Prarit Bhargava, please review Petr's work and see if you can come up
> > > > with a sensible way to address this for stable.
> > >
> > > I found the original thread surrounding these changes and I do not think
> > > this solution should have been committed to the kernel. It is not a good
> > > solution to the problem being solved and adds complexity where none is
> > > needed, IMO.
> > >
> > > Quoting from the original thread,
> > >
> > > > Motivation for this patch is to fix an issue observed on larger machines
> > > > with many CPUs, where it can take a significant amount of time during boot
> > > > to run systemd-udev-trigger.service. An x86-64 system can already have
> > > > intel_pstate active, but as its CPUs can also match acpi_cpufreq and
> > > > pcc_cpufreq, udev will attempt to load these modules too. The operation
> > > > will eventually fail in the init function of the respective module, where
> > > > it is recognized that another cpufreq driver is already loaded and -EEXIST
> > > > is returned. However, one uevent is triggered for each CPU, and so multiple
> > > > loads of these modules will be present. The current code then processes all
> > > > such loads individually and serializes them with the barrier in
> > > > add_unformed_module().
> > >
> > > The way to solve this is not in the module loading code, but in the udev
> > > code by adding a new event, or in the userspace that handles the loading
> > > events.
> > >
> > > Option 1)
> > >
> > > Write/modify a udev rule to use a flock userspace file lock to prevent
> > > repeated loading. The problem with this is that it is still racy, still
> > > consumes CPU time repeatedly loading the ELF header, and, depending on the
> > > system (i.e. a large number of CPUs), would still cause a boot delay. This
> > > would be better than what we have and is worth looking at as a simple
> > > solution. I'd like to see boot times with this change, and I'll try to come
> > > up with a measurement on a large CPU system.
> >
> > This sounds like a ping-pong between projects over where to put the
> > complexity. Honestly, I think that it would be more reliable if we
> > serialized/squashed parallel loads on the kernel side.
>
> Well, that's the world we live in. Module loading ping-pongs between udev
> and the kernel.

You are missing the point. Think of stable first. Upgrading udev is not
an option.
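To make the mechanics concrete: the serialization everyone keeps referring
to is the wait in add_unformed_module(). A rough sketch, paraphrased from
memory of kernel/module.c around the time of 6e6de3dee51a (not a verbatim
quote, so details may differ):

	/* Paraphrased sketch of the barrier in add_unformed_module():
	 * every parallel load of the same module name parks here until
	 * the earlier one either goes live or goes away. */
again:
	mutex_lock(&module_mutex);
	old = find_module_all(mod->name, strlen(mod->name), true);
	if (old != NULL) {
		/* Since 6e6de3dee51a, UNFORMED modules wait here too,
		 * instead of failing fast with -EEXIST, so N duplicate
		 * requests are processed one after another. */
		if (old->state == MODULE_STATE_COMING ||
		    old->state == MODULE_STATE_UNFORMED) {
			mutex_unlock(&module_mutex);
			err = wait_event_interruptible(module_wq,
					finished_loading(mod->name));
			if (err)
				goto out_unlocked;
			goto again;
		}
		err = -EEXIST;
		goto out;
	}

That is why one uevent per CPU turns into a long chain of full module
loads on a big machine: each duplicate waits its turn, finds the failed
module gone, and then runs the whole load itself.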
Yes, you can think of optimizations later that udev can do, and perhaps
should do, but that's beyond the scope of the fix needed here.

kmod (the library which modprobe now uses) can do, and probably already
does, a lookup for already-loaded modules prior to issuing a new request
(a sketch of that kind of check follows below). But even then, we cannot
assume all users use kmod (think Android). Anything can request a new
module, and we should do what is sensible in-kernel.

I'd like to see us think about stable first here.

  Luis
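To make that lookup concrete, here is a minimal userspace sketch, assuming
a check against /sys/module. This is illustrative only: the helper and the
module name are made up, and this is not kmod's actual code.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Hypothetical pre-check, NOT kmod's real implementation: treat the
 * presence of /sys/module/<name> as "already loaded (or built in)"
 * and skip the load request. */
static bool module_already_loaded(const char *name)
{
	char path[PATH_MAX];
	struct stat st;

	snprintf(path, sizeof(path), "/sys/module/%s", name);
	return stat(path, &st) == 0 && S_ISDIR(st.st_mode);
}

int main(void)
{
	const char *mod = "acpi_cpufreq";

	if (module_already_loaded(mod))
		printf("%s already present, skipping load request\n", mod);
	else
		printf("%s not present, would issue load request\n", mod);
	return 0;
}

It also shows why such a check can never be complete on its own: two
processes can both see "not loaded" and still issue duplicate requests,
which is exactly why the kernel has to handle duplicates sensibly
regardless.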