On Mon, Sep 26, 2022 at 09:32:44PM +0200, Uladzislau Rezki wrote:
[...]
> > > > On my KVM machine the boot time is affected:
> > > >
> > > > <snip>
> > > > [    2.273406] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
> > > > [   11.945283] e1000 0000:00:03.0 ens3: renamed from eth0
> > > > [   22.165198] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
> > > > [   22.165206] cdrom: Uniform CD-ROM driver Revision: 3.20
> > > > [   32.406981] sr 1:0:0:0: Attached scsi CD-ROM sr0
> > > > [  104.115418] process '/usr/bin/fstype' started with executable stack
> > > > [  104.170142] EXT4-fs (sda1): mounted filesystem with ordered data mode. Quota mode: none.
> > > > [  104.340125] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
> > > > [  104.340193] systemd[1]: Detected virtualization kvm.
> > > > [  104.340196] systemd[1]: Detected architecture x86-64.
> > > > [  104.359032] systemd[1]: Set hostname to <pc638>.
> > > > [  105.740109] random: crng init done
> > > > [  105.741267] systemd[1]: Reached target Remote File Systems.
> > > > <snip>
> > > >
> > > > The first delay is between 2 - 11 and the second one is between 32 - 104.
> > > > So there are still users which must be waiting for "RCU" in a sync way.
> > >
> > > I was wondering if you can compare boot logs and see which timestamp the
> > > slowdown starts from. That way, we can narrow down the callback. Another
> > > idea is to add "trace_event=rcu:rcu_callback,rcu:rcu_invoke_callback
> > > ftrace_dump_on_oops" to the boot params, and then manually call
> > > "tracing_off(); panic();" from the code at the first printk that seems off in
> > > your comparison of good vs bad. For example, if the "crng init done" timestamp
> > > is off, put the "tracing_off(); panic();" there. Then grab the serial console
> > > output to see what were the last callbacks that were queued/invoked.
> >
> > We do seem to be in need of some way to quickly and easily locate the
> > callback that needed to be _flush() due to a wakeup.
> >
> <snip>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index aeea9731ef80..fe1146d97f1a 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1771,7 +1771,7 @@ bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)
>
>  	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
>  		rwork->wq = wq;
> -		call_rcu(&rwork->rcu, rcu_work_rcufn);
> +		call_rcu_flush(&rwork->rcu, rcu_work_rcufn);
>  		return true;
>  	}
>
> <snip>
>
> ?
>
> But it does not fully solve my boot-up issue. Will debug tomorrow further.

Ah, but at least it's progress, thanks. Could you send me a patch with the
details of this to include in the next revision?

> > Might one more proactive approach be to use Coccinelle to locate such
> > callback functions? We might not want -all- callbacks that do wakeups
> > to use call_rcu_flush(), but knowing which are which should speed up
> > slow-boot debugging by quite a bit.
> >
> > Or is there a better way to do this?
> >
> I am not sure what Coccinelle is. If we had something automated that
> measured boot time and, if needed, did some profiling, that would be good.
> Otherwise it is mainly manual debugging, IMHO.

Paul,

What about using a default-off kernel CONFIG that splats on all lazy
call_rcu() callbacks that do a wake-up? We could use the trace hooks to do
it in-kernel, I think.
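Very roughly, something like the below completely untested sketch. The
CONFIG name, the probe functions, and the per-CPU flag are all made up for
illustration; only the rcu_invoke_callback and sched_waking events exist
today (and the rcu ones need CONFIG_RCU_TRACE), and it assumes a new
end-of-invoke event (more on that below):

<snip>
/*
 * Completely untested sketch. CONFIG_RCU_LAZY_WAKEUP_DEBUG and the
 * rcu_end_invoke_callback event are made up. This flags any callback
 * that wakes something, not just lazy ones, and can false-positive if
 * an irq doing an unrelated wakeup fires mid-callback on this CPU.
 */
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <trace/events/rcu.h>
#include <trace/events/sched.h>

/* Function of the callback currently being invoked on this CPU, if any. */
static DEFINE_PER_CPU(rcu_callback_t, rcu_cb_in_flight);

/* rcu_invoke_callback fires right before the callback runs. */
static void probe_invoke(void *ignore, const char *rcuname,
			 struct rcu_head *rhp)
{
	__this_cpu_write(rcu_cb_in_flight, rhp->func);
}

/* The proposed end-of-invoke event would fire right after it returns. */
static void probe_end_invoke(void *ignore, const char *rcuname,
			     struct rcu_head *rhp)
{
	__this_cpu_write(rcu_cb_in_flight, NULL);
}

/* Splat once on the first wakeup performed from within a callback. */
static void probe_waking(void *ignore, struct task_struct *p)
{
	WARN_ONCE(__this_cpu_read(rcu_cb_in_flight),
		  "RCU callback %ps does a wakeup, needs call_rcu_flush()?\n",
		  __this_cpu_read(rcu_cb_in_flight));
}

static int __init rcu_lazy_wakeup_debug_init(void)
{
	if (!IS_ENABLED(CONFIG_RCU_LAZY_WAKEUP_DEBUG))
		return 0;
	WARN_ON(register_trace_rcu_invoke_callback(probe_invoke, NULL));
	WARN_ON(register_trace_rcu_end_invoke_callback(probe_end_invoke, NULL));
	WARN_ON(register_trace_sched_waking(probe_waking, NULL));
	return 0;
}
late_initcall(rcu_lazy_wakeup_debug_init);
<snip>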
I can talk to Steve to get ideas on how to do that, but I think it can be
done purely from trace events (we might need a new trace_end_invoke_callback
that fires after the callback is invoked). Thoughts?

thanks,

 - Joel
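P.S. For completeness, here is the sort of thing I mean by
trace_end_invoke_callback, modeled on the existing rcu_invoke_callback
event in include/trace/events/rcu.h. Untested, and both the name and the
call site (right after the callback is invoked in rcu_do_batch()) are
guesses:

<snip>
/*
 * Untested sketch. Would be called as
 * trace_rcu_end_invoke_callback(rcu_state.name, rhp) right after the
 * callback returns in rcu_do_batch().
 */
TRACE_EVENT_RCU(rcu_end_invoke_callback,

	TP_PROTO(const char *rcuname, struct rcu_head *rhp),

	TP_ARGS(rcuname, rhp),

	TP_STRUCT__entry(
		__field(const char *, rcuname)
		__field(void *, rhp)
	),

	TP_fast_assign(
		__entry->rcuname = rcuname;
		__entry->rhp = rhp;
	),

	/*
	 * Unlike rcu_invoke_callback, do not record rhp->func here: the
	 * callback has already run and may have freed its rcu_head.
	 */
	TP_printk("%s rhp=%p", __entry->rcuname, __entry->rhp)
);
<snip>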