2.02.98: lvs returns "dm_report_object: report function failed for field data_percent" after issuing lvchange --discards

Hi there,

I noticed a transient and, as far as I can tell, benign error reported by lvs after running lvchange --discards passdown (I can't discern any adverse effects). A simple script that eventually reproduces it is at the bottom of this email.

I have also been performing these tests *without* thin_check being enabled on volume activation. I don't know if it makes any difference, but with thin_check enabled I frequently got "device busy" errors, or lvchange could not change the state of an active volume (even though lvchange -a n had been issued for the pool beforehand).
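For reference, this is roughly how I disabled thin_check for these tests -- a minimal lvm.conf sketch, assuming the empty-string convention for the executable setting:

```
# /etc/lvm/lvm.conf (excerpt)
global {
    # An empty string disables the thin_check run on thin-pool activation
    thin_check_executable = ""
}
```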

The error itself is: dm_report_object: report function failed for field data_percent. The output of lvs at that point is:
LV                                                   VG         Attr      LSize  Pool Origin      Data%  Move Log Copy%  Convert
TrollVolume                                          TrollGroup Vwi-i-tz- 23.90g pool            
pool                                                 TrollGroup twi-a-tz- 23.90g                    2.60                        
troll_snapshot_3_11_941fa1b380d748c5a5ee446ee5817960 TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_13_e48750c4c0cc47e88a80f278cee8616d TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_14_075ec8acaebb4964a1afa62c53fff7de TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_17_59135a4db9c74fc8bb78346d976b6ba2 TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_19_965c49f78f504e1cbe897f7de8c2e07d TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_4_e77e18ec7faf4b8b987999588d3e01b7  TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_5_d50e703f9cd94a1c93d77055fe4db2a8  TrollGroup Vwi---tz- 23.90g pool TrollVolume                               
troll_snapshot_3_7_80f0c30ae46e4bc696b650327b9923a0  TrollGroup Vwi---tz- 23.90g pool TrollVolume

Note that the first volume, TrollVolume, is in the inactive-table state (a mapped device is present with an (i)nactive table). It looks like it is transitioning to the active state at this point; could there be a race?
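To confirm that inactive-table state independently of lvs, dmsetup can show it directly (the device-mapper name below is assumed from the VG/LV names in the lvs output; adjust to your setup):

```
# Show whether the device has live and/or inactive tables loaded;
# "Tables present: LIVE & INACTIVE" indicates a switch-over in flight
sudo dmsetup info TrollGroup-TrollVolume

# Inspect the not-yet-resumed table itself
sudo dmsetup table --inactive TrollGroup-TrollVolume
```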

I'm doing all of this with kernel 3.8.0-32-generic on Ubuntu 12.04 LTS (lvm tools were backported to the 12.04 release).

Does anyone know if this is a known issue and whether an upgrade to a newer version of lvm tools (or kernel) would fix this?

One more question: I observe that lvchange --discards passdown activates the pool and all volumes in it, so one apparently does not have to run lvchange -a y explicitly after deactivating the volumes and issuing lvchange --discards. Is that expected? I couldn't find any documentation on this behavior.
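The way I checked that activation state, for what it's worth -- just reading the fifth character of the attr field (a = active, i = inactive table, s = suspended):

```
# List activation state for every LV in the group
sudo lvs -o lv_name,lv_attr test_group
```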

Thank you,
Timur

The script:

#!/bin/bash

set -e

while true; do
        echo 'deactivate'
        # /tmp/volumes is populated with the output of ls /dev/test_group
        while read -r i; do sudo lvchange -a n "test_group/$i"; done < /tmp/volumes
        echo 'ignore'
        sudo lvchange --discards ignore test_group/pool
        echo 'passdown'
        sudo lvchange --discards passdown test_group/pool
        echo 'lvs'
        sudo lvs
        echo 'activate'
        while read -r i; do sudo lvchange -a y "test_group/$i"; done < /tmp/volumes
done
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
