Re: how to refresh LV to apply fstrim online

On Tue, Oct 25, 2016 at 4:12 PM, Zdenek Kabelac <zkabelac@xxxxxxxxxx> wrote:

[...] without giving downtime, in a safe way.


Normally it's not advised to use the 'dmsetup' command on an LV.
The above sequence should be equivalent to:

lvchange --refresh vg/lv
(or vgchange --refresh vg, doing it for every active LV in the VG)

It's unclear how this could help - unless you were doing some 'pvmove'
operations (which might be worth a BZ).

You should collect all states while it DOES NOT work,
and then run the --refresh (which you think is fixing it for you).
ATM I'm clueless how you could get a mapping without TRIM that --refresh can fix.
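
For example, a capture of both the broken and the fixed state could look
roughly like this (dm-XX standing for whatever node the LV resolves to):

# run once while fstrim fails, and again after --refresh:
dmsetup table /dev/dm-XX
dmsetup status /dev/dm-XX
cat /sys/block/dm-XX/queue/discard_granularity
cat /sys/block/dm-XX/queue/discard_max_bytes
lsblk -D        # DISC-GRAN/DISC-MAX columns for every layer of the stack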


Regards

Zdenek




When I rescan the individual devices (the multipath legs), I then get:
[root@dbatest1 ~]# for dev in sdea sdem ; do grep "" /sys/block/${dev}/queue/discard_granularity; done
1048576
1048576
[root@dbatest1 ~]#
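
The same loop can be extended to the other discard limits as well, e.g.
(sdea/sdem being the two legs from this test; grep prefixes each value
with its file name when given several files):

for dev in sdea sdem ; do
    grep "" /sys/block/${dev}/queue/discard_{granularity,max_bytes}
done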

But the problem is the device-mapper device of the LV itself, while the multipath one is fine.
I had already tried both

vgchange --refresh VG_TEST_REDO
and
lvchange --refresh VG_TEST_REDO/LV_TEST_REDO

BTW: the VG contains only this LV.

Yes, the current device has been the target of a pvmove operation; initially the LUN was not configured as thin-provisioned.

But the underlying dm device still reports 0 as its discard granularity, and the fstrim command fails.
In this new test it is dm-14 (while the multipath device of the related PV is dm-47, which is fine):

[root@dbatest1 ~]# multipath -l /dev/mapper/3600a098038303769752b495147377867
3600a098038303769752b495147377867 dm-47 NETAPP,LUN C-Mode
size=2.0G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 7:0:5:21 sdea 128:32  active undef running
  `- 8:0:5:21 sdem 128:224 active undef running
[root@dbatest1 ~]# 

[root@dbatest1 ~]# ll /dev/VG_TEST_REDO/LV_TEST_REDO
lrwxrwxrwx 1 root root 8 Oct 25 17:25 /dev/VG_TEST_REDO/LV_TEST_REDO -> ../dm-14
[root@dbatest1 ~]#
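
Just to double-check which devices sit below that node, dmsetup can print
the stacking directly (commands only, output omitted here):

dmsetup ls --tree          # should show VG_TEST_REDO-LV_TEST_REDO on top of dm-47
dmsetup deps /dev/dm-14    # major:minor pairs of the devices dm-14 maps onto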


[root@dbatest1 ~]# cat /sys/block/dm-47/queue/discard_granularity
1048576
[root@dbatest1 ~]# 


[root@dbatest1 ~]# cat /sys/block/dm-14/queue/discard_granularity
0
[root@dbatest1 ~]#
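
The mismatch can also be seen in one shot with lsblk, whose -D mode prints
the discard limits (DISC-GRAN, DISC-MAX) for the given device and every
layer stacked above it, i.e. the leg, the multipath map and the LV:

lsblk -D /dev/sdea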

So the problem seems to be finding a way to get the discard_granularity propagated into dm-14.
One way to obtain it is the method I used:

[root@dbatest1 ~]# dmsetup info /dev/dm-14
Name:              VG_TEST_REDO-LV_TEST_REDO
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 14
Number of targets: 1
UUID: LVM-ebT3iin7JZdFZCoR05NEPpXcDosNQ3Y46HwejMdu0o9qfeeeMcRTemcuGtUVqMds

[root@dbatest1 ~]# dmsetup table /dev/dm-14 > my_dm_table

[root@dbatest1 ~]# dmsetup suspend /dev/dm-14 ; dmsetup reload /dev/dm-14 my_dm_table ; dmsetup resume /dev/dm-14
[root@dbatest1 ~]#

And now
[root@dbatest1 ~]# cat /sys/block/dm-14/queue/discard_granularity
1048576
[root@dbatest1 ~]# 

and
[root@dbatest1 ~]# fstrim /TEST/redolog/
[root@dbatest1 ~]# 
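
If this has to be repeated for other LVs, the suspend/reload/resume sequence
can be wrapped in a small script; keep in mind that I/O on the device is
frozen between suspend and resume. A rough sketch (refresh_dm_discard.sh is
just a name I made up, not an LVM tool):

#!/bin/sh
# refresh_dm_discard.sh -- re-load the live table of a dm device so the
# queue limits (discard_granularity etc.) are re-stacked from the devices
# underneath it. I/O on the device is frozen between suspend and resume.
dev="$1"                               # e.g. /dev/dm-14 or a mapper name
table=$(mktemp) || exit 1
dmsetup table "$dev" > "$table" || { rm -f "$table"; exit 1; }
dmsetup suspend "$dev" || { rm -f "$table"; exit 1; }
dmsetup reload "$dev" "$table"         # on failure the old table stays live
dmsetup resume "$dev"                  # resume unconditionally, never leave I/O frozen
rm -f "$table"

Running fstrim -v afterwards also reports how many bytes were discarded,
which is a quick way to confirm the TRIM really reached the device.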

