Re: Ceph Ansible Repo

On 06/03/14 13:19, Gandalf Corvotempesta wrote:
> 2014-03-06 13:07 GMT+01:00 David McBride <dwm37@xxxxxxxxx>:
>> This causes the IO load to be nicely balanced across the two SSDs,
>> removing any hot spots, at the cost of enlarging the failure domain of
>> the loss of an SSD from half a node to a full node.
> 
> This is not a solution for me.
> Why not use LVM with a VG striped across both SSDs?
> I've never used LVM without RAID; what happens in case of failure
> of a physical disk?  Is the whole VG lost?

Yes.  A stripe-set depends on all of the members of an array, whether
managed through MD or LVM.

Thus, in a machine with two SSDs, which are striped together, the loss
of *either* SSD will cause all of the OSDs hosted by that machine to be
lost.
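For reference, a striped VG of the sort being discussed might be set up along these lines (a sketch only; /dev/sda, /dev/sdb, the VG/LV names, and the sizes are placeholder assumptions):

```shell
# Register both SSDs as physical volumes (placeholder device names)
pvcreate /dev/sda /dev/sdb

# Group them into a single volume group
vgcreate ssd_vg /dev/sda /dev/sdb

# Carve out a journal LV striped across both PVs:
#   -i 2  = two stripes (one per PV), -I 64 = 64 KiB stripe size
lvcreate -i 2 -I 64 -L 10G -n osd0-journal ssd_vg
```

Because every extent of such an LV alternates between the two PVs, losing either SSD destroys the LV, which is exactly the whole-node failure domain described above.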

(Note: if you want to use LVM rather than GPT partitions on MD, you will
probably need to remove the '|dm-*' clause from the Ceph udev rules that
govern OSD assembly before they will work as expected.)
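By way of illustration only: the change amounts to deleting the '|dm-*' alternation from the device-matching patterns in the Ceph udev rules file. The file path and rule layout below are assumptions and vary by release and distribution; inspect the file before editing:

```shell
# Back up the rules file, then strip the '|dm-*' alternation from its
# device-match patterns (path is an assumption; check your distribution).
cp /lib/udev/rules.d/95-ceph-osd.rules /lib/udev/rules.d/95-ceph-osd.rules.bak
sed -i 's/|dm-\*//g' /lib/udev/rules.d/95-ceph-osd.rules

# Tell udev to pick up the modified rules
udevadm control --reload-rules
```

After this, a match such as KERNEL=="sd*|dm-*" becomes KERNEL=="sd*", so the rules no longer fire on device-mapper nodes.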

Kind regards,
David
-- 
David McBride <dwm37@xxxxxxxxx>
Unix Specialist, University Computing Service
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



