Re: LVM hatred, was Re: /boot on a separate partition?




On 06/25/2015 01:20 PM, Chris Adams wrote:
> ... It's basically a way to assemble one arbitrary set of block devices and then divide them into another arbitrary set of block devices, but now separate from the underlying physical structure.
>
> Regular partitions have various limitations (one big one on Linux being that modifying the partition table of a disk with in-use partitions is a PITA and most often requires a reboot), and LVM abstracts away some of them. ...

I'll give an example. I have a backup server, and for various reasons (hardlinks, primarily) all the data needs to be in a single filesystem. However, this is running on an older VMware ESX server, and those have a 2TB LUN size limit. So even though my EMC Clariion arrays can handle 10TB LUNs without issue, VMware ESX and all of its guests cannot. As a result, I have a lot of raw device mappings (RDMs) for the guests. The backup server's LVM looks like this:
[root@backup-rdc ~]# pvscan
  PV /dev/sdd1   VG vg_opt       lvm2 [1.95 TB / 0    free]
  PV /dev/sde1   VG vg_opt       lvm2 [1.95 TB / 0    free]
  PV /dev/sdf1   VG vg_opt       lvm2 [1.95 TB / 0    free]
  PV /dev/sda2   VG VolGroup00   lvm2 [39.88 GB / 0    free]
  PV /dev/sdg1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdh1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdi1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdj1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdk1   VG bak-rdc      lvm2 [1.47 TB / 0    free]
  PV /dev/sdl1   VG bak-rdc      lvm2 [1.47 TB / 0    free]
  PV /dev/sdm1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdn1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdo1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdp1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdq1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdr1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdb1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  PV /dev/sdc1   VG bak-rdc      lvm2 [1.95 TB / 0    free]
  Total: 18 [32.27 TB] / in use: 18 [32.27 TB] / in no VG: 0 [0   ]
[root@backup-rdc ~]# lvscan
  ACTIVE            '/dev/vg_opt/lv_backups' [5.86 TB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [37.91 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [1.97 GB] inherit
  ACTIVE            '/dev/bak-rdc/cx3-80' [26.37 TB] inherit
[root@backup-rdc ~]#
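For anyone who hasn't built one of these, a volume group like bak-rdc is assembled with something like the following (device list abbreviated; names taken from the pvscan output above):

  pvcreate /dev/sdg1 /dev/sdh1             # label each LUN as an LVM physical volume (remaining LUNs omitted)
  vgcreate bak-rdc /dev/sdg1 /dev/sdh1     # pool them into one volume group
  lvcreate -l 100%FREE -n cx3-80 bak-rdc   # carve one big logical volume out of the whole pool
  mkfs.xfs /dev/bak-rdc/cx3-80             # and put a single XFS filesystem on it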

It's just beautiful the way I can take another 1.95 TB LUN, add it to the volume group, expand the logical volume, and then grow the underlying filesystem (XFS), dynamically adding storage while everything stays online. Being on an EMC Clariion foundation, I don't have to worry about the RAID either; the RAID6 and hot-sparing are handled by the array. SAN and LVM were made for each other. And if and when I migrate the guest over to physical hardware on the same SAN, or to some other virtualization platform, I can use LVM's tools to migrate off all those 1.95 and 1.47 TB LUNs onto a few larger LUNs and blow away the smaller ones, again with the system online. On top of that, the EMC Clariion FLARE OE software gives me great flexibility in moving LUNs around within the array for performance and other reasons.
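In case anyone wants the recipe, the grow sequence is roughly this (the new device name and the mount point are made up for illustration):

  pvcreate /dev/sds1                          # label the new LUN as a physical volume
  vgextend bak-rdc /dev/sds1                  # add it to the volume group
  lvextend -l +100%FREE /dev/bak-rdc/cx3-80   # grow the LV into the new free extents
  xfs_growfs /backups                         # grow XFS online; it takes the mount point

And the online migration to fewer, larger LUNs is just pvmove (again, device names made up):

  pvcreate /dev/sdt1             # label the new, larger LUN
  vgextend bak-rdc /dev/sdt1     # add it to the volume group
  pvmove /dev/sdg1               # move all extents off an old small LUN, live
  vgreduce bak-rdc /dev/sdg1     # drop the emptied PV from the volume group
  pvremove /dev/sdg1             # wipe its LVM label before handing the LUN back

Just remember that XFS can only grow, never shrink.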



