Re: [Question] multipathd add/remove paths takes a long time

On 2022/7/20 0:55, Roger Heflin wrote:
> What does the cpu time look like when you are seeing this issue?
> 
> I have seen large numbers of scsi devices coming in and multipaths
> getting built cause the system to seem to waste time. With a high
> number of udev_children (I believe the default is pretty high) it can
> use excessive cpu on a big machine with a lot of paths and appears to
> be interfering with itself.

Our problem may be a little different: a large number of multipath
devices has already been created, and we only add one multipath device
at a time, so there shouldn't be a lot of udev events.
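One way to verify that assumption is to watch the udev event stream
while the device is being added and count the events that arrive, e.g.:

    udevadm monitor --udev --property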

> In testing I was involved in, it was found that setting udev_children
> to 4 produced consistently fast behavior, whereas having it set to the
> default (lots of threads on large machines; the exact number varies
> with machine size/distribution/udev version) sometimes produced
> systemd timeouts when paths were brought in (>90 seconds to find the
> PVs for required LVs).
> 
> The giveaway was that udev accumulated 50-90 minutes of cpu time in a
> couple of minutes of boot-up with the default udev_children, but with
> it set to only 4 the paths processed faster, the machine booted up
> faster, and udev did the same real work with much less cpu time
> (2-3 minutes).
> 
> This is the option:
> /usr/lib/systemd/systemd-udevd --children-max=4

We modified it as you suggested, but it doesn't work very well.
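For reference, per udev.conf(5) and systemd-udevd.service(8) the same
limit can also be set persistently, so it survives a udevd restart
(exact file locations may differ per distribution):

    # /etc/udev/udev.conf
    children_max=4

    # or on the kernel command line
    udev.children_max=4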

> On Tue, Jul 19, 2022 at 7:33 AM Wu Guanghao <wuguanghao3@xxxxxxxxxx> wrote:
>>
>> The system has 1K multipath devices, each device has 16 paths.
>> Executing "multipathd add path"/"multipathd remove path" (or the
>> corresponding uev_add_path()/uev_remove_path() handlers) to add or
>> remove a single path takes over 20s. What's more, the next checker
>> loop may start immediately after the previous one finishes, so the
>> lock is re-taken right away. That is far too long.
>>
>> We found that time was mostly spent waiting for locks.
>>
>> checkerloop() {
>>         ...
>>         pthread_cleanup_push(cleanup_lock, &vecs->lock);
>>         lock(&vecs->lock);
>>         /* vecs->lock is held across the entire loop below */
>>         vector_foreach_slot (vecs->pathvec, pp, i) {
>>                 rc = check_path(...); /* too many paths: the full pass takes a long time */
>>                 ...
>>         }
>>         lock_cleanup_pop(vecs->lock); /* unlocks and pops the cleanup handler */
>>         ...
>> }
>>
>> Can the scope of the vecs->lock critical section be reduced, so that
>> adding/removing paths does not take so long?
>>
In our test environment it takes over 40s for checkerloop() to check
all 16K paths, and vecs->lock is not released during that time. So a
command that adds or removes a path while the loop is running may have
to wait up to 40s just to acquire vecs->lock.
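As a rough illustration, here is a minimal sketch of what we have in
mind: checking each path under its own short critical section instead
of one long one. This is not actual multipath-tools code; the vector
helpers, find_path_by_dev(), and the simplified check_path() call just
follow the naming of the pseudocode above.

    static void checkerloop_sketch(struct vectors *vecs)
    {
            vector names = vector_alloc();
            struct path *pp;
            char *dev;
            int i;

            /* Snapshot the current path names while holding the lock. */
            lock(&vecs->lock);
            vector_foreach_slot(vecs->pathvec, pp, i) {
                    if (vector_alloc_slot(names))
                            vector_set_slot(names, strdup(pp->dev));
            }
            unlock(&vecs->lock);

            vector_foreach_slot(names, dev, i) {
                    lock(&vecs->lock);
                    /*
                     * The path may have been removed while the lock was
                     * dropped, so look it up again before checking it.
                     */
                    pp = find_path_by_dev(vecs->pathvec, dev);
                    if (pp)
                            check_path(vecs, pp); /* simplified signature */
                    unlock(&vecs->lock);
                    /* add/remove requests can acquire vecs->lock here */
            }

            /* Free the snapshot. */
            vector_foreach_slot(names, dev, i)
                    free(dev);
            vector_free(names);
    }

The trade-off is that a path added after the snapshot is not checked
until the next loop, and every iteration pays a lock/unlock round trip,
but a waiting add/remove request no longer has to sit behind the whole
40s pass.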

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel



