issue about return value in _lvchange_activate_single

Hello List, David & Zdenek,

I ran into a troublesome problem when executing the lvchange command.

In LVM's tools/errors.h, the command return values are defined as:
```
#define ECMD_PROCESSED      1
#define ENO_SUCH_CMD        2
#define EINVALID_CMD_LINE   3
#define EINIT_FAILED        4
#define ECMD_FAILED         5
```

LVM's internal functions generally treat a return value of 0 as the error case, so callers often check `if (!fn())`.
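
To illustrate how the two conventions clash, here is a minimal, self-contained C sketch; the constants mirror tools/errors.h, but do_activate() is a hypothetical stand-in, not real lvm2 code:
```
#include <stdio.h>

#define ECMD_PROCESSED  1       /* command-level success */
#define ECMD_FAILED     5       /* command-level failure */

/* hypothetical callee that follows the command-level convention */
static int do_activate(int should_fail)
{
        return should_fail ? ECMD_FAILED : ECMD_PROCESSED;
}

int main(void)
{
        int ret = do_activate(1);       /* simulate a failure */

        /* internal-style truth test: only catches ret == 0,
         * so ECMD_FAILED (5) slips through as "success" */
        if (!ret)
                printf("failure detected\n");
        else
                printf("treated as success, ret = %d\n", ret);

        return 0;
}
```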

If _lvchange_activate() returns ECMD_FAILED, the caller _lvchange_activate_single() treats it as success:
```
if (!_lvchange_activate(cmd, lv))  /* ECMD_FAILED is 5, so this branch is not taken */
        return_ECMD_FAILED;

return ECMD_PROCESSED;
```

So under these conditions, the lvchange command exits with status 0 even though it actually failed.

The code could be changed as follows:
```
diff --git a/tools/lvchange.c b/tools/lvchange.c
index f9a0b54e3..ae626a05b 100644
--- a/tools/lvchange.c
+++ b/tools/lvchange.c
@@ -1437,6 +1437,7 @@ static int _lvchange_activate_single(struct cmd_context *cmd,
 {
        struct logical_volume *origin;
        char snaps_msg[128];
+       int rv;
 
        /* FIXME: untangle the proper logic for cow / sparse / virtual origin */
 
@@ -1465,8 +1466,10 @@ static int _lvchange_activate_single(struct cmd_context *cmd,
                }
        }
 
-       if (!_lvchange_activate(cmd, lv))
+       rv = _lvchange_activate(cmd, lv);
+       if (!rv || rv > 1) {
                return_ECMD_FAILED;
+       }
 
        return ECMD_PROCESSED;
 }
```
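
Since ECMD_PROCESSED (1) is the only success value, comparing against it directly may read more clearly; this sketch of the same check is equivalent for the values defined in tools/errors.h (untested):
```
rv = _lvchange_activate(cmd, lv);
if (rv != ECMD_PROCESSED)
        return_ECMD_FAILED;

return ECMD_PROCESSED;
```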

The same problem may exist in other places in the lvm2 source code.
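
One way to look for other call sites that truth-test an ECMD_*-returning function, assuming a checkout of the lvm2 tree (the regex is only a rough heuristic and will also match internal helpers that legitimately return 0 on error):
```
git grep -nE 'if \(!_[a-z_]+\(' tools/
```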

How to trigger:

The environment has two nodes, node1 and node2, sharing one iSCSI LUN.
Both nodes use the system ID to control which host owns the shared disk.
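
For reference, taking the system ID from uname (as node1 does in step 1 below) corresponds to this lvm.conf setting; a sketch assuming the default /etc/lvm/lvm.conf:
```
global {
        system_id_source = "uname"
}
```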

1. node1 has already set up its system ID from uname:

```
[tb-clustermd1 ~]# pvs --foreign
  PV         VG  Fmt  Attr PSize   PFree  
  /dev/sda   vg1 lvm2 a--  292.00m 260.00m
[tb-clustermd1 ~]# vgs --foreign -o+systemid
  VG  #PV #LV #SN Attr   VSize   VFree   System ID    
  vg1   1   1   0 wz--n- 292.00m 260.00m tb-clustermd1
[tb-clustermd1 ~]# lvs --foreign -o+Host
  LV   VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Host         
  lv1  vg1 -wi-a----- 32.00m                                                     tb-clustermd1
[tb-clustermd1 ~]# dmsetup ls
vg1-lv1 (254:0)
[tb-clustermd1 ~]#
```

2. node2 changes the system ID to itself:

```
[tb-clustermd2 ~]# vgchange -y --config "local/extra_system_ids='tb-clustermd1'" --systemid tb-clustermd2 vg1
  Volume group "vg1" successfully changed
[tb-clustermd2 ~]# lvchange -ay vg1/lv1
[tb-clustermd2 ~]# dmsetup ls
vg1-lv1 (254:0)
[tb-clustermd2 ~]#
```
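
(For context: local/extra_system_ids in lvm.conf lists additional system IDs whose VGs this host is allowed to access; the --config override above temporarily grants node2 access to the VG still owned by tb-clustermd1 so it can take ownership. See lvmsystemid(7).)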

3. At this point, both nodes have the dm device active:
```
[tb-clustermd1 ~]# dmsetup ls
vg1-lv1 (254:0)
[tb-clustermd2 ~]# dmsetup ls
vg1-lv1 (254:0)
``` 

4. node1 executes lvchange commands. Note that the exit status is 0 even though the commands failed:
```
[tb-clustermd1 ~]# lvchange -ay vg1/lv1 ; echo $?
  WARNING: Found LVs active in VG vg1 with foreign system ID tb-clustermd2.  Possible data corruption.
  Cannot activate LVs in a foreign VG.
0
[tb-clustermd1 ~]# dmsetup ls
vg1-lv1 (254:0)
[tb-clustermd1 ~]# lvchange -an vg1/lv1 ; echo $?
  WARNING: Found LVs active in VG vg1 with foreign system ID tb-clustermd2.  Possible data corruption.
0
[tb-clustermd1 ~]# dmsetup ls
No devices found
[tb-clustermd1 ~]#
```
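
With the patch above applied, the failed activation in step 4 would be expected to exit with status 5 (ECMD_FAILED) instead of 0, so scripts and cluster resource agents could detect the failure.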

Thanks.
