Re: Multiple disks per server.

Hi Mickaël,

I was wondering the same.

When I was reading the wiki, I found:
http://ceph.newdream.net/wiki/Librados

"Each pool also has a few parameters that define how the object is
stored, namely a replication level (2x, 3x, etc.) and a mapping rule
describing how replicas should be distributed across the storage cluster
(e.g., each replica in a separate rack)."

So it should be possible, but I haven't found any configuration
references for it.
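
As far as I understand it (I haven't tried this myself), the placement
rules live in the CRUSH map rather than in ceph.conf. Below is a rough,
untested sketch of a decompiled CRUSH map fragment; the host and OSD
names, ids, weights and the rule name are made up for illustration. It
groups the per-disk OSDs into one bucket per host and tells CRUSH to
pick each replica from a different host, so two copies of an object
never land on the same server:

# Sketch only -- names, ids and weights are placeholders.
type 0 osd
type 1 host
type 2 root

# One bucket per server, containing that server's per-disk OSDs.
host node0 {
        id -2
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
host node1 {
        id -3
        alg straw
        hash 0
        item osd.2 weight 1.000
        item osd.3 weight 1.000
}

# The root bucket ties the hosts together; a 'room' level could be
# added the same way for the multi-room case.
root default {
        id -1
        alg straw
        hash 0
        item node0 weight 2.000
        item node1 weight 2.000
}

# First choose N distinct hosts (N = the pool's replication level),
# then one OSD inside each chosen host.
rule replicate_across_hosts {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type host
        step choose firstn 1 type osd
        step emit
}

I believe crushtool can compile a map like this and the monitors can be
told to use it, but again, I haven't tested it, so treat the rule as a
starting point rather than a recipe.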

-- 
Kind regards,

Wido den Hollander
Head of System Administration / CSO
Phone support Netherlands: 0900 9633 (45 cpm)
Phone support Belgium: 0900 70312 (45 cpm)
Direct phone: (+31) (0)20 50 60 104
Fax: +31 (0)20 50 60 111
E-mail: support@xxxxxxxxxxxx
Website: http://www.pcextreme.nl
Knowledge base: http://support.pcextreme.nl/
Network status: http://nmc.pcextreme.nl


On Tue, 2010-05-04 at 12:18 +0000, Mickaël Canévet wrote:
> Hi,
> 
> I'm testing ceph on 4 old servers.
> 
> As there is more than one disk per server available for data (two servers 
> with 6 disks and two with 10 disks, for a total of 32 disks over 4 nodes), 
> I was wondering how to define the OSDs.
> 
> I have the choice between one OSD per disk (32 OSDs in the cluster) or one 
> OSD per server, with a single btrfs filesystem spanning all disks of the 
> server (4 OSDs in the cluster). Which one is the better solution?
> 
> In the first case, if I lose one disk, I lose only a small part of the 
> available space. In the other case, if I lose one disk, I lose the whole 
> server (as the btrfs filesystem stripes across all its disks), which is 
> much more space.
> 
> On the other hand, if I lose a whole server in the first case, I can lose 
> all replicas of an object, because they may end up on two different OSDs 
> on the same server.
> 
> Is there a way to define OSD groups so that we can be sure that two 
> replicas are never placed on OSDs of the same group? This could be useful 
> for multiple OSDs per server, but also for multiple servers per computer 
> room: if I lose a whole room, and with it many servers, I would still be 
> sure that I have not lost every replica.
> 
> Thanks a lot.
> Mickaël

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
