Re: lvm2 raid volumes

On 08/03/2016 12:49 AM, Steve Dainard wrote:
Hello,

What are the methods for checking/monitoring a RAID LV?

Hi Steve,

see dmeventd (the device-mapper event monitoring daemon) and read lvm.conf regarding raid_fault_policy.

dmeventd supports a "warn" or "allocate" fault policy: the former merely warns about a RAID DataLV
or MetaLV failure, while the latter actively repairs such failures. You'll find related messages in the system log.
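The policy lives in the activation section of lvm.conf; a minimal fragment (the "warn"/"allocate" values are the documented options, and with "warn" the repair step is run manually):

```
# /etc/lvm/lvm.conf
activation {
    # "warn":     only log DataLV/MetaLV failures to the system log
    # "allocate": automatically replace failed images using free
    #             extents on other PVs in the VG
    raid_fault_policy = "allocate"
}
```

With "warn", you would repair a degraded RAID LV yourself with `lvconvert --repair vg/lv` once a replacement PV is available.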


The Cpy%Sync field seems promising here:

# lvs
  LV    VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid1 test         rwi-aor--- 100.00m                                    100.00          
  raid6 test         rwi-aor--- 108.00m                                    100.00         

The Cpy%Sync field tells you about the resynchronization progress, i.e. the initial mirroring of
all data blocks in a raid1/10 or the initial calculation and storing of parity blocks in raid4/5/6.

It should display a percentage value as in:

# lvs -o+devices iscsi
  LV   VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                           
  r    iscsi rwi-a-r--- 4.00t                                    0.03             r_rimage_0(0),r_rimage_1(0),r_rimage_2(0),r_rimage_3(0),r_rimage_4(0),r_rimage_5(0)
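For scripting, the same value can be selected as a raw field. A minimal sketch: on a live system the input would come from `lvs --noheadings --separator ';' -o lv_name,copy_percent <vg>` (check `lvs -o help` for the exact field name on your version); a captured sample stands in here so the parsing logic is self-contained:

```shell
#!/bin/sh
# Hedged sketch: report any RAID LV whose resync is not yet complete.
# The sample mimics `lvs --noheadings --separator ';' -o lv_name,copy_percent`.
sample="raid1;100.00
raid6;42.17"

echo "$sample" | while IFS=';' read -r lv pct; do
    # Anything below 100.00 is still resynchronizing.
    if [ "$pct" != "100.00" ]; then
        echo "LV $lv still syncing: $pct%"
    fi
done
```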



# pvs
  PV         VG           Fmt  Attr PSize    PFree  
  /dev/vdb   test         lvm2 a--  1020.00m 876.00m
  /dev/vdc   test         lvm2 a--  1020.00m 876.00m
  /dev/vdd   test         lvm2 a--  1020.00m 980.00m
  /dev/vde   test         lvm2 a--  1020.00m 980.00m
  /dev/vdf   test         lvm2 a--  1020.00m 980.00m

But testing in a VM by removing a disk does not change the output of lvs:

# pvs
  WARNING: Device for PV S5xFZ7-mLaH-GNQP-ujWh-Zbkt-Ww3u-J0aKUJ not found or rejected by a filter.
  PV             VG           Fmt  Attr PSize    PFree  
  /dev/vdb       test         lvm2 a--  1020.00m 876.00m
  /dev/vdc       test         lvm2 a--  1020.00m 876.00m
  /dev/vdd       test         lvm2 a--  1020.00m 980.00m
  /dev/vde       test         lvm2 a--  1020.00m 980.00m
  unknown device test         lvm2 a-m  1020.00m 980.00m

# lvs
  WARNING: Device for PV S5xFZ7-mLaH-GNQP-ujWh-Zbkt-Ww3u-J0aKUJ not found or rejected by a filter.
  LV    VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid1 test         rwi-aor--- 100.00m                                    100.00          
  raid6 test         rwi-aor-p- 108.00m                                    100.00          
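Note that the output above did change in one place: the ninth character of the raid6 Attr string flipped from '-' to 'p' (partial, i.e. one or more underlying devices missing). A sketch of checking that health bit from a script, using the attr strings shown above:

```shell
#!/bin/sh
# Sketch: inspect the 9th character of the lvs attr string, which is
# the volume health bit ('-' healthy, 'p' partial, 'r' refresh needed, ...).
check_attr() {
    attr="$1"
    health=$(echo "$attr" | cut -c9)
    case "$health" in
        -) echo healthy ;;
        p) echo partial ;;
        *) echo "degraded ($health)" ;;
    esac
}

check_attr "rwi-aor---"   # prints: healthy
check_attr "rwi-aor-p-"   # prints: partial
```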


My end goal is to write a nagios check to monitor for disk failures.

You may want to start with the Nagios checkvolmanager plugin...
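For a home-grown alternative, here is a hedged sketch of the check logic (exit codes follow Nagios conventions; the `lv_health_status` reporting field exists in recent lvm2 versions, but verify with `lvs -o help` on yours). On a live system the input would come from `lvs --noheadings --separator ';' -o lv_name,lv_health_status`; a captured sample stands in so the logic is self-contained:

```shell
#!/bin/sh
# Hypothetical Nagios-style check: CRITICAL (2) if any LV reports a
# non-empty health status, OK (0) otherwise.
check_lvs() {
    # $1: lvs output, one "name;health" pair per line
    bad=$(echo "$1" | awk -F';' '$2 != "" { printf "%s=%s ", $1, $2 }')
    if [ -n "$bad" ]; then
        echo "CRITICAL: degraded LVs: $bad"
        return 2
    fi
    echo "OK: all LVs healthy"
    return 0
}

# Degraded sample: raid6 is partial.
check_lvs "raid1;
raid6;partial"
# Healthy sample: empty health field for every LV.
check_lvs "raid1;
raid6;"
```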

Heinz


Thanks,
Steve


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

