Re: What is the proper way to grow LVM/GFS volumes

I did. After running the lvextend command, 'clvmd -R' failed on node2. I then fenced node2 (fenmrdev02) and ran 'clvmd -R' again, this time without problems. I tried lvextend again - I got the same problem, and again 'clvmd -R' timed out.
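
For reference, the full sequence I am trying to follow is roughly the one below (the mount point /mnt/nulv4 is just a placeholder for wherever the filesystem is actually mounted):

  pvcreate /dev/sdd                        # initialize the new disk as a PV
  vgextend nuvg4 /dev/sdd                  # add the PV to the volume group
  lvextend -l +100%FREE /dev/nuvg4/nulv4   # grow the LV into the free space
  gfs_grow /mnt/nulv4                      # grow the mounted GFS filesystem
                                           # (gfs2_grow for GFS2)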

On Tue, Nov 18, 2008 at 4:00 PM, Finnur Örn Guðmundsson <finnzi@xxxxxxxxxx> wrote:
Hi,

Try to run: clvmd -R on one of the nodes.
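
As I understand it, -R tells every clvmd in the cluster to reload its device cache and re-read the LVM metadata. A quick way to see whether the nodes agree again afterwards (just a sanity check, not a fix in itself):

  clvmd -R
  lvs nuvg4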

Bgrds,
Finnur

> Here is the update:
>
> [root@fenmrdev03 ~]# lvextend  -l+100%FREE  /dev/nuvg4/nulv4
>   Extending logical volume nulv4 to 10.00 GB
>   Error locking on node fenmrdev04: device-mapper: create ioctl failed:
> Device or resource busy
>   Error locking on node fenmrdev03: device-mapper: create ioctl failed:
> Device or resource busy
>   Failed to suspend nulv4
>
>
> On Tue, Nov 18, 2008 at 3:15 PM, Scott Wilson <swilson@xxxxxxxxxxxx>
> wrote:
>
>>
>> I think you need:
>>
>> lvextend  -l+100%FREE  /dev/nuvg4/nulv4
>>            ^
>>
>> Without the +, you were trying to set the logical volume size to the size
>> of your free space, not adding the free space to the size.
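>>
>> For example, with nulv4 at 5.00G and 5.00G free in nuvg4 (rough
>> illustration, not output from your system):
>>
>>   lvextend -l 100%FREE /dev/nuvg4/nulv4    # asks for a size of 5G -> no growth
>>   lvextend -l +100%FREE /dev/nuvg4/nulv4   # adds the 5G of free space -> 10G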
>>
>>
>> Scott Wilson                    Lead System Administrator
>> swilson@xxxxxxxxxxxx            NSIT - DCS - SeaUnix
>>
>>
>> On Tue, 18 Nov 2008, Alan A wrote:
>>
>>> I tried a few times to grow an LVM volume by adding an additional PV to the
>>> VG and then running 'lvextend'. I am not sure what I am doing wrong, but I
>>> get a message about an error locking on one of the nodes, and then GFS
>>> hangs.
>>>
>>> Here are some of the details:
>>>
>>> [root@fenmrdev03 ~]# vgs
>>>  VG         #PV #LV #SN Attr   VSize  VFree
>>>  VolGroup00   2   2   0 wz--n- 50.75G    0
>>>  gfs_sda1     1   1   0 wz--nc 10.00G 5.00G
>>>  gfs_sdb1     1   1   0 wz--nc 10.00G 3.00G
>>>  nuvg4        2   1   0 wz--nc 10.00G 5.00G
>>> [root@fenmrdev03 ~]# lvs
>>>  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%
>>> Convert
>>>  LogVol00 VolGroup00 -wi-ao 40.81G
>>>  LogVol01 VolGroup00 -wi-ao  9.94G
>>>  gfs_sda1 gfs_sda1   -wi-ao  5.00G
>>>  gfs_sdb1 gfs_sdb1   -wi-ao  7.00G
>>>  nulv4    nuvg4      -wi-ao  5.00G
>>> [root@fenmrdev03 ~]# pvs
>>>  PV                VG         Fmt  Attr PSize  PFree
>>>  /dev/cciss/c0d0p2 VolGroup00 lvm2 a-   33.81G    0
>>>  /dev/cciss/c0d1p1 VolGroup00 lvm2 a-   16.94G    0
>>>  /dev/sda1         gfs_sda1   lvm2 a-   10.00G 5.00G
>>>  /dev/sdb1         gfs_sdb1   lvm2 a-   10.00G 3.00G
>>>  /dev/sdc          nuvg4      lvm2 a-    5.00G    0
>>>  /dev/sdd          nuvg4      lvm2 a-    5.00G 5.00G
>>>
>>>
>>> Command I tried:
>>> [root@fenmrdev03 ~]# lvextend  -l100%FREE  /dev/nuvg4/nulv4
>>>
>>>
>>>
>>>
>>> --
>>> Alan A.
>>>
>>>
>
>
>
> --
> Alan A.





--
Alan A.
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
