Re: how to refresh LV to apply fstrim online

On Fri, Oct 21, 2016 at 1:32 AM, Gianluca Cecchi <gianluca.cecchi@xxxxxxxxx> wrote:

On Thu, Oct 20, 2016 at 1:40 PM, Zdenek Kabelac <zkabelac@xxxxxxxxxx> wrote:

Hi

Please provide a listing of all your 'multipath' leg devices - do
they support TRIM?
Then 'check' the dm device.

See  (and attach)

 grep "" /sys/block/*/queue/discard_granularity


Also make sure you are NOT using the 'ext2' filesystem, which does not support discard on RHEL6, and that you are on the latest available RHEL6 kernel.


Regards

Zdenek



Hello,
thanks for the answer.
I'm using the ext3 filesystem, which supports discard.
Currently I'm on this kernel:
 
[root@dbatest1 ~]# uname -r
2.6.32-431.29.2.el6.x86_64
[root@dbatest1 ~]# 


It seems I was able to find a way to refresh the logical volume online and then successfully run the fstrim command against the related filesystem, without deactivating the LV and, most importantly, without generating downtime for my users.
Please note that I'm working on a test system with the same setup as in production.
Can you confirm my approach and comment on it, so that I can eventually apply it in production?


[snip]
 

[root@dbatest1 ~]# dmsetup suspend /dev/dm-4 ; dmsetup reload /dev/dm-4 my_dm_table ; dmsetup resume /dev/dm-4
[root@dbatest1 ~]# echo $?
0
[root@dbatest1 ~]#
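In other words, the sequence boils down to something like the sketch below. This is only an illustration of what I ran, not a validated tool: the device path and the table file are placeholders, and the table file must hold the device's current table (e.g. saved beforehand with "dmsetup table"), so the reload only picks up the queue limits the kernel has rediscovered.

```shell
#!/bin/sh
# Sketch of the online refresh: suspend, reload the (unchanged) table,
# resume. "$dev" and "$table" are placeholders for the real device and
# a file containing its current table line.
refresh_dm() {
    dev="$1"      # e.g. /dev/dm-4
    table="$2"    # file saved from: dmsetup table "$dev"
    dmsetup suspend "$dev" || return 1
    if dmsetup reload "$dev" "$table"; then
        # resume makes the reloaded table live and re-queries limits
        dmsetup resume "$dev"
    else
        # resume anyway so the device is not left suspended
        dmsetup resume "$dev"
        return 1
    fi
}
```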

And now the magic:

[root@dbatest1 ~]# fstrim /ALM/rdoffline

[snip]
 


I hope I wasn't too rude in asking for "certification"... what I actually meant was a comment on the method, and whether anything nasty (e.g. during the instant the device is suspended...) could happen with it.

In any case, I have many multipath devices/filesystems on this test server and I can proceed to test them one by one.
The LUNs do indeed support TRIM.

In particular, if I take one of the LUNs that still needs to be trimmed, I have:


[g.cecchi@dbatest1 ~]$ sudo multipath -l 3600a098038303769752b495147377867
3600a098038303769752b495147377867 dm-47 NETAPP,LUN C-Mode
size=2.0G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 7:0:5:21 sdea 128:32  active undef running
  `- 8:0:5:21 sdem 128:224 active undef running
[g.cecchi@dbatest1 ~]$ 


[g.cecchi@dbatest1 ~]$ for dev in sdea sdem
> do
> grep "" /sys/block/${dev}/queue/discard_granularity
> done
0
0
[g.cecchi@dbatest1 ~]$ 

and for the multipath device itself:

[g.cecchi@dbatest1 ~]$ cat /sys/block/dm-47/queue/discard_granularity 
0
[g.cecchi@dbatest1 ~]$ 
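For convenience, the same sysfs checks can be looped over a dm device and its slave paths, which sysfs exposes under the device's "slaves" directory, so the path names don't have to be looked up from the multipath output by hand (just a sketch; "dm-47" is the example device from the listing above):

```shell
#!/bin/sh
# Sketch: print discard_granularity for a dm device and each of its
# slave (path) devices, as listed under /sys/block/<dm>/slaves/.
check_discard() {
    dm="$1"    # e.g. dm-47
    printf '%s: %s\n' "$dm" "$(cat /sys/block/"$dm"/queue/discard_granularity)"
    for s in /sys/block/"$dm"/slaves/*; do
        [ -e "$s" ] || continue    # skip if the glob matched nothing
        slave=$(basename "$s")
        printf '  %s: %s\n' "$slave" \
            "$(cat /sys/block/"$slave"/queue/discard_granularity)"
    done
}
```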


On devices where I was already able to reload/trim (see my previous post, for example), I also had 0 as the result beforehand, but now I have, for example:

[g.cecchi@dbatest1 ~]$ for dev in sddt sdef; do grep "" /sys/block/${dev}/queue/discard_granularity; done
1048576
1048576
[g.cecchi@dbatest1 ~]$ 

and also

[g.cecchi@dbatest1 ~]$ cat /sys/block/dm-4/queue/discard_granularity 
1048576
[g.cecchi@dbatest1 ~]$ 
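And to confirm the effect once discard_granularity is non-zero, fstrim's verbose flag (supported by the util-linux fstrim) prints how many bytes were discarded; this is just a sketch, with the mount point from my earlier example:

```shell
#!/bin/sh
# Sketch: -v makes fstrim report how many bytes it asked the device
# to discard (util-linux). Requires root and a mounted filesystem.
trim_verbose() {
    fstrim -v "$1"
}
```

Called as, e.g., trim_verbose /ALM/rdoffline.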

Hope I have clarified things...

I don't understand, though, what you meant by 'check' dm device...

Thanks in advance for any help in understanding whether this can be a safe workflow for running fstrim without causing downtime.

Gianluca
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
