Re: [QUESTION] mdadm 4.1 upgrade to mdadm 4.2, mdmonitor service fails to start if no RAID in environment

On Thu, 11 May 2023 15:56:31 +0800
miaoguanqin <miaoguanqin@xxxxxxxxxx> wrote:

> Hi,
>   We have run into a problem. After upgrading mdadm from 4.1 to 4.2,
> we execute:
>      systemctl start mdmonitor
> and the mdmonitor service fails to start when there is no RAID device
> in the environment. The error message is as follows:
> 
> mdmonitor.service - MD array monitor
>       Loaded: loaded (/usr/lib/systemd/system/mdmonitor.service; 
> enabled; vendor preset: enabled)
>       Active: failed (Result: protocol) since Thu 2023-05-11 10:52:32 
> CST; 5s ago
>      Process: 999741 ExecStartPre=mkdir -p /run/mdadm (code=exited, 
> status=0/SUCCESS)
>      Process: 999743 ExecStart=/sbin/mdadm --monitor
> $MDADM_MONITOR_ARGS -f --pid-file=/run/mdadm/mdadm.pid (c>
> 
> May 11 10:52:32 localhost.localdomain systemd[1]: Starting MD array 
> monitor...
> May 11 10:52:32 localhost.localdomain systemd[1]: mdmonitor.service: 
> Can't open PID file /run/mdadm/mdadm.pid>
> May 11 10:52:32 localhost.localdomain systemd[1]: mdmonitor.service: 
> Failed with result 'protocol'.
> May 11 10:52:32 localhost.localdomain systemd[1]: Failed to start MD 
> array monitor.
> 
> In the mdmonitor service file, Type is set to forking and the PIDFile
> field is set. The systemd detection process is as follows:
> (1) when the parent process exits, a signal is sent to systemd;
> (2) systemd wakes up and checks, via the PIDFile field, whether the
> pidfile exists;
> (3) if the pidfile does not exist, the service status is set to
> failed.
> In the Monitor() code logic, after the parent process creates the
> pidfile but before systemd checks for it, the pidfile is deleted by
> the child process. As a result, systemd cannot find the pidfile and
> sets the service status to failed.
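> 
> For reference, the relevant unit settings, reconstructed here from
> the status output above (the exact lines in the shipped unit may
> differ slightly), look roughly like this:
> 
>     [Service]
>     Type=forking
>     PIDFile=/run/mdadm/mdadm.pid
>     ExecStartPre=mkdir -p /run/mdadm
>     ExecStart=/sbin/mdadm --monitor $MDADM_MONITOR_ARGS -f --pid-file=/run/mdadm/mdadm.pid
> 
> With Type=forking plus PIDFile=, systemd reads the pidfile after the
> parent exits to find the main PID; if the file is already gone, it
> reports "Can't open PID file" and fails with Result: protocol, which
> is exactly the window described above.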
> 
> This is a problem for users, because the mdmonitor service ends up
> in the failed state. If there is no RAID device in the environment,
> we would expect the service to be inactive after it is started. Do
> you have any advice on this problem?

Hi,

I do not know whether the service should be in this state in such a
case, but starting the mdmonitor service is also forced by a udev
rule in udev-md-raid-arrays.rules

# ENV{MD_LEVEL}=="raid[1-9]*", ENV{SYSTEMD_WANTS}+="mdmonitor.service"

so even if the service is in the failed state, it will be started
again anyway after a new RAID is created.
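
You can observe this in practice (the device and member names below
are just an example):

    # with no arrays present, the start fails as you describe
    systemctl start mdmonitor; systemctl is-failed mdmonitor

    # once an array exists, udev pulls the service in via SYSTEMD_WANTS
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    systemctl status mdmonitor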

Regards,
Blazej


