What does the CPU time look like when you are seeing this issue? I have seen
large numbers of SCSI devices coming in and multipaths being built cause the
system to waste time. With a high number of udev children (I believe the
default is pretty high), udev can use excessive CPU on a big machine with a
lot of paths and appears to interfere with itself. In testing I was involved
in, setting the udev worker count to 4 produced consistently fast behavior,
whereas leaving it at the default (lots of workers on large machines; the
exact number varies with machine size, distribution, and udev version)
sometimes produced systemd timeouts while paths were being brought in
(>90 seconds to find the PVs for required LVs). The giveaway was that udev
accumulated 50-90 minutes of CPU time in a couple of minutes of boot with the
default worker count, but with it set to only 4 the paths were processed
faster, the machine booted faster, and udev did the same real work with much
less CPU time (2-3 minutes). The option is:
/usr/lib/systemd/systemd-udevd --children-max=4
(see the note after the quoted message below for a persistent way to set this).

On Tue, Jul 19, 2022 at 7:33 AM Wu Guanghao <wuguanghao3@xxxxxxxxxx> wrote:
>
> The system has 1K multipath devices, each device has 16 paths.
> Executing multipathd add / multipathd remove or uev_add_path/
> uev_remove_path to add or remove paths takes over 20 s.
> What's more, the second checker loop may be executed immediately
> after the first checker loop finishes. It's too long.
>
> We found that the time was mostly spent waiting for locks.
>
> checkerloop() {
>     ...
>     lock(&vecs->lock);
>     vector_foreach_slot (vecs->pathvec, pp, i) {
>         rc = check_path(...); // too many paths, so this takes a long time
>         ...
>     }
>     lock_cleanup_pop(vecs->lock);
>     ...
> }
>
> Can the scope of the vecs->lock lock be narrowed to reduce the time spent
> when adding/removing paths?
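
A side note on the --children-max suggestion above: if lowering the worker
count helps, it can usually be made persistent rather than editing the daemon
command line. On a recent systemd-based udev (an assumption; check the
udev.conf and systemd-udevd documentation for your distribution and version),
the equivalent settings would be:

    # /etc/udev/udev.conf
    children_max=4

or, on the kernel command line:

    udev.children_max=4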
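
Regarding the question in the quoted message about narrowing the scope of
vecs->lock: below is a small standalone pthreads sketch of one possible
approach, dropping and re-acquiring the lock between small batches of paths
so that add/remove path handlers can interleave with a checker pass. This is
not multipath-tools code and I have not measured it against your workload;
the names (vecs_lock, check_path, CHECK_CHUNK, NPATHS) and the batch size are
illustrative assumptions only. Compile with something like gcc -pthread.

/*
 * Standalone sketch (not multipath-tools code): a checker thread that
 * drops vecs_lock between small batches of paths, so concurrent
 * add/remove path handlers are not blocked for a whole pass.
 * All names and sizes here are illustrative.
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NPATHS      160   /* stand-in for vecs->pathvec */
#define CHECK_CHUNK 8     /* paths checked per lock acquisition (tunable) */

static pthread_mutex_t vecs_lock = PTHREAD_MUTEX_INITIALIZER;
static int path_state[NPATHS];
static atomic_int stop;

static void check_path(int i)
{
    usleep(1000);             /* pretend the checker does real work */
    path_state[i] ^= 1;
}

static void *checkerloop(void *arg)
{
    (void)arg;
    while (!stop) {
        for (int i = 0; i < NPATHS; ) {
            pthread_mutex_lock(&vecs_lock);
            /* hold the lock only for a small batch of paths ... */
            for (int n = 0; n < CHECK_CHUNK && i < NPATHS; n++, i++)
                check_path(i);
            pthread_mutex_unlock(&vecs_lock);
            sched_yield();    /* ... so waiters (add/remove path) can run */
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, checkerloop, NULL);

    /* simulate uev_add_path/uev_remove_path contending for the lock */
    for (int n = 0; n < 50; n++) {
        pthread_mutex_lock(&vecs_lock);
        path_state[n % NPATHS] = 0;
        pthread_mutex_unlock(&vecs_lock);
        usleep(2000);
    }

    atomic_store(&stop, 1);
    pthread_join(tid, NULL);
    printf("checker stopped cleanly\n");
    return 0;
}

The obvious trade-off is that paths can be added or removed in the middle of
a checker pass, so real code would have to re-validate the iteration state
after re-acquiring the lock rather than trusting a saved index.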