I'm creating my LVM volume on Amazon EC2 as follows:
mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdh1 /dev/sdh2 /dev/sdh3 /dev/sdh4
blockdev --setra 128 /dev/md0
blockdev --setra 128 /dev/sdh1
blockdev --setra 128 /dev/sdh2
blockdev --setra 128 /dev/sdh3
blockdev --setra 128 /dev/sdh4
dd if=/dev/zero of=/dev/md0 bs=512 count=1   # zero the first sector before pvcreate
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -l 100%VG -n data vg0
mke2fs -t ext4 -F /dev/vg0/data
echo '/dev/vg0/data /data ext4 defaults,auto,noatime,nodiratime,noexec 0 0' | tee -a /etc/fstab
mount /data
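
To check that everything came up as intended, I run something along these lines afterwards (standard md/LVM status commands, output will obviously vary):

cat /proc/mdstat             # md0 should be a raid0 of the four EBS devices
pvs && vgs && lvs            # PV on /dev/md0, VG vg0, LV data
df -h /data                  # mounted ext4 filesystem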
So I have 4 EBS volumes: /dev/sdh1 through /dev/sdh4.
To back it up, I unmount /data (to make the snapshots consistent) and take EBS snapshots of each volume, roughly as shown below.
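
Concretely, the backup step looks something like this (the volume IDs are placeholders, and I'm using the aws CLI here just for illustration; any snapshot tool works the same way):

umount /data
# snapshot each of the four EBS volumes backing md0 (placeholder volume IDs)
for vol in vol-1111 vol-2222 vol-3333 vol-4444; do
    aws ec2 create-snapshot --volume-id "$vol" --description "md0 member backup"
done
mount /data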
To restore, I just attach volumes created from the snapshots (without paying attention to the order) and run:
vgchange -ay
and, magically, the data is restored and everything is fine (I just need to mount /dev/vg0/data).
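
Spelled out, the restore amounts to roughly the following (the explicit mdadm step is only needed if udev doesn't auto-assemble the array when the volumes appear; in my case vgchange alone has been enough):

mdadm --assemble --scan      # reassemble /dev/md0 from the metadata on the member devices
vgchange -ay                 # activate vg0 found on the array
mount /dev/vg0/data /data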
Am I doing everything correctly? Does the order of the volumes matter?