Multiple disks per server.

Hi,

I'm testing ceph on 4 old servers.

As there is more than one disk per server available for data (two servers with 6 disks and two with 10 disks, for a total of 32 disks over 4 nodes), I was wondering how to define the OSDs.

I have the choice between one OSD per disk (32 OSDs in the cluster) and one OSD per server, with a single btrfs filesystem spanning all the disks of that server (4 OSDs in the cluster). Which is the better solution?
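For illustration, here is roughly how I picture the one-OSD-per-disk layout in ceph.conf (this is only my sketch of the mkcephfs-era format as I understand it from the wiki; hostnames, paths and devices are invented):

    [osd]
        osd data = /data/osd$id        ; one data directory per OSD
    [osd.0]
        host = node1
        btrfs devs = /dev/sdb          ; dedicate this whole disk to osd.0
    [osd.1]
        host = node1
        btrfs devs = /dev/sdc
    ; ... and so on, up to osd.31 across the four nodes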

In the first case, if I lose one disk, I lose only a small part of the available space. In the second case, if I lose one disk, I lose the whole server (since the btrfs filesystem stripes across all its disks), which is much more space.

On the other hand, if I lose a whole server in the first case, I may lose every replica of some data, because the replicas may sit on two different OSDs of the same server.

Is there a way to define OSD groups so that we can be sure that two replicas never end up on OSDs of the same group? That would be useful for multiple OSDs per server, but also for multiple servers per machine room: if I lose a whole room, and with it many servers, I would still be sure that I have not lost every replica of anything.
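From what I have read about CRUSH, something like the following in the crush map might express such groups (this is only my guess at the syntax; bucket names and weights are invented, so please correct me if I am wrong):

    # Group the per-disk OSDs into host buckets, and hosts into rooms.
    host node1 {
        id -2
        alg straw
        item osd.0 weight 1.000
        item osd.1 weight 1.000
        # ... one item per disk of this node (6 here)
    }
    room room1 {
        id -10
        alg straw
        item node1 weight 6.000
        item node2 weight 6.000
    }
    # room2 would hold node3 and node4 (10 disks each).
    root default {
        id -1
        alg straw
        item room1 weight 12.000
        item room2 weight 20.000
    }

    # Place each replica under a different host.
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

If I understand it correctly, changing "type host" to "type room" in the chooseleaf step would give the per-room guarantee instead. Can someone confirm?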

Thanks a lot.
Mickaël