Re: Recommended way of leveraging multiple disks by Ceph


 



Hello!

On Tue, Sep 15, 2015 at 04:16:47PM +0000, fangzhe.chang wrote:

> Hi,

> I'd like to run Ceph on a few machines, each of which has multiple disks. The disks are heterogeneous: some are rotational disks of larger capacities while others are smaller solid state disks. What are the recommended ways of running ceph osd-es on them?

> Two of the approaches can be:

> 1)      Deploy an osd instance on each hard disk. For instance, if a machine has six hard disks, there will be six osd instances running on it. In this case, does Ceph's replication algorithm recognize that these osd-es are on the same machine and therefore try to avoid placing replicas on disks/osd-es of the same machine?

When you add an OSD (or at any later time) you can set a CRUSH location for it.
PG placement is driven by your CRUSH rules and the CRUSH locations of the OSDs,
so with the default rules replica copies are written to different hosts.
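A minimal sketch of setting a CRUSH location; `osd.0`, `node1`, and the weight are example values, not from this thread, and the exact syntax can vary between Ceph releases:

```shell
# Place osd.0 under host node1 in the default root, with weight 1.0.
# Run on a node with an admin keyring.
ceph osd crush set osd.0 1.0 root=default host=node1

# Alternatively, set the location in ceph.conf so it is applied on OSD start:
#   [osd.0]
#   osd crush location = "root=default host=node1"
```

Host separation itself comes from the CRUSH rule: the default replicated rule uses `step chooseleaf firstn 0 type host`, which picks each replica under a distinct host bucket.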

I have a config with multiple disks on 3 nodes, some of them HDDs and one SSD
per node. Each disk serves one OSD.

> 2)      Create a logical volume spanning multiple hard disks of a machine and run a single copy of osd per machine.

It is more reliable to have several OSDs, one per drive. When you lose a drive,
you will not lose all the data on the host.
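A per-disk layout along these lines can be sketched with ceph-deploy; the hostname, data disks, and SSD journal partitions below are placeholders, not from this thread:

```shell
# One OSD per data disk, with journals on SSD partitions:
#   ceph-deploy osd create {host}:{data-disk}[:{journal}]
ceph-deploy osd create node1:sdb:/dev/ssd1p1 \
                      node1:sdc:/dev/ssd1p2 \
                      node1:sdd:/dev/ssd1p3
```

With this layout, losing one disk takes down only that OSD, and Ceph re-replicates its PGs from the surviving copies on other hosts.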

> If you have previous experiences, benchmarking results, or know a pointer to the corresponding documentation, please share with me and other users. Thanks a lot.

I found this article helpful:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/

-- 
WBR, Max A. Krasilnikov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





