Re: Can we have multiple OSDs in a single machine

Stefan Kleijkers <stefan <at> unilogicnetworks.net> writes:

> 
> Hello,
> 
> Yes, that's no problem. I've been using that configuration for some time now.
> Just generate a config with multiple OSD sections sharing the same node/host.
> 
> With newer Ceph versions, mkcephfs is smart enough to detect the OSDs
> on the same node and will generate a CRUSH map so that objects get
> replicated to different nodes.
> 
> I didn't see any performance impact, as long as you have enough
> processing power, since running more OSDs per node needs more CPU.
> 
> I wanted to use just a few OSDs per node on top of mdraid, so I could
> use RAID6. That way I could swap a faulty disk without bringing the
> node down, but I couldn't get it stable with mdraid.
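
If I understand the CRUSH part right, the rule mkcephfs generates would look something like the sketch below. This is just my illustration; the exact rule and bucket names (e.g. a root bucket called "default") will vary by version:

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

The "step chooseleaf firstn 0 type host" line is what places each replica under a different host bucket, so two OSDs on the same machine never end up holding copies of the same object.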
> 
This is what the OSD part of my ceph.conf looks like:

[osd.0]
        host = ceph-node-1
        btrfs devs = /dev/sda6

[osd.1]
        host = ceph-node-2
        btrfs devs = /dev/sda6

[osd.2]
        host = ceph-node-3
        btrfs devs = /dev/sda6

[osd.3]
        host = ceph-node-4
        btrfs devs = /dev/sda6



Can you please help me with how to add multiple OSDs on the same machine,
considering that I have 4 partitions created for OSDs?
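
Based on Stefan's description above, I would guess something like the
following for one machine; the extra device names /dev/sda7 through
/dev/sda9 are just placeholders for my four partitions:

[osd.0]
        host = ceph-node-1
        btrfs devs = /dev/sda6

[osd.1]
        host = ceph-node-1
        btrfs devs = /dev/sda7

[osd.2]
        host = ceph-node-1
        btrfs devs = /dev/sda8

[osd.3]
        host = ceph-node-1
        btrfs devs = /dev/sda9

Is that the right way to do it?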

I have powerful machines, each with six quad-core Intel Xeon CPUs and 48 GB of RAM.






