Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance <ugob@xxxxxxxx> wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> forms VolGroup00 together with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> forms VolGroup00 together with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and
sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1,
add it to the VG, and increase the size of my /vz logical volume.
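I guess that would be something like this (untested; assuming sdc1/sdf1 span
the new disks, the LV is /dev/VolGroup00/vz with ext3 on it, and adjusting
the size to whatever vgdisplay reports as free):
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# pvcreate /dev/md3
# vgextend VolGroup00 /dev/md3
# lvextend -L +30G /dev/VolGroup00/vz
# resize2fs /dev/VolGroup00/vz
(or ext2online instead of resize2fs on older releases)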
However, if I could convert that to a RAID5 (re-installing would be possible,
but I would rather not), I could have 6 drives in RAID5, so I'd have
5x36 GB (180 GB) of space available in total, instead of 3x36 (108 GB).
180? You mean 2 x 5x18.
Oh, I just realized I have 2x18 GB and 4x36 GB. I have two other 36 GB HDDs
here, so maybe I could have a 6x36 RAID5 this way. Does it matter if four
of the HDDs are 10K rpm and two are 7200 rpm?
What about RAID6? I don't think I need fault tolerance for two simultaneous
HDD failures...
Yes and without rebooting :-)
- break the 36GB mirror (using mdadm, fail /dev/sdd2 in /dev/md1, and then remove it),
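Something like this, if I remember the mdadm syntax correctly:
# mdadm /dev/md1 --fail /dev/sdd2
# mdadm /dev/md1 --remove /dev/sdd2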
- break the 18GB mirror (using mdadm, fail /dev/sde1 in /dev/md2, and then remove it),
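And the same for the 18GB one (double-check the device names first):
# mdadm /dev/md2 --fail /dev/sde1
# mdadm /dev/md2 --remove /dev/sde1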
- create sd[cf][123] of 200MB, 18GB, 18GB (the 200MB partition is useless,
but it keeps the same partitioning scheme as the other disks)
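One way to do that (from memory, untested): create the three partitions on
sdc with fdisk (type fd, Linux raid autodetect), then clone the layout to sdf:
# fdisk /dev/sdc
# sfdisk -d /dev/sdc | sfdisk /dev/sdf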
- create a _degraded_ raid5 named /dev/mdX from sdc2, sdf2, sdd2 and sde1,
with one slot left missing for sda2 (added later, see the last step)
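Something like this, I think (replace mdX with a free device, e.g. md3; the
"missing" keyword keeps one slot empty so the array starts degraded, and each
member will be sized down to the smallest one, about 18GB):
# mdadm --create /dev/mdX --level=5 --raid-devices=5 /dev/sdc2 /dev/sdf2 /dev/sdd2 /dev/sde1 missing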
- vgextend your VolGroup00 to use this new array
# pvcreate /dev/mdX
# vgextend VolGroup00 /dev/mdX
- then move all PEs from md1 to mdX
# pvmove /dev/md1 /dev/mdX
- then remove md1 from the VG
# vgreduce VolGroup00 /dev/md1
- now you don't need md1 anymore, stop it (sorry, I'm less skilled with the
mdadm commands and don't have the manual page at hand)
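I think it is just this, once nothing uses it anymore (the --zero-superblock
is optional, to wipe the old raid1 superblock from sda2 before reusing it;
also remove any md1 entry from /etc/mdadm.conf if there is one):
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sda2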
- now add /dev/sda2 to your _degraded_ raid 5
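That should be something like this; you can then watch the rebuild in /proc/mdstat:
# mdadm /dev/mdX --add /dev/sda2
# cat /proc/mdstat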
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos