Re: Can we have multiple OSDs in a single machine

Hello,

You will get something like this:

[osd.0]
        host = ceph-node-1
        btrfs devs = /dev/sda6

[osd.1]
        host = ceph-node-1
        btrfs devs = /dev/sda7

[osd.2]
        host = ceph-node-1
        btrfs devs = /dev/sda8

[osd.3]
        host = ceph-node-1
        btrfs devs = /dev/sda9


[osd.4]
        host = ceph-node-2
        btrfs devs = /dev/sda6

[osd.5]
        host = ceph-node-2
        btrfs devs = /dev/sda7

etc...
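
If you are bootstrapping the cluster from scratch, a config like this can then be deployed with mkcephfs. A rough sketch, assuming passwordless ssh from the admin node to every host (the --mkbtrfs flag formats the listed btrfs devs; the exact options depend on your Ceph version):

        # initialize all daemons listed in ceph.conf over ssh
        mkcephfs -a -c /etc/ceph/ceph.conf --mkbtrfs
        # start every daemon on every host
        service ceph -a start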

But as Tomasz mentions, you get no extra performance, because in most cases the disk is the bottleneck.

Besides, I recommend not using btrfs devs anymore, as that option is going to be deprecated. That leaves only the "osd data = <directory>" option.
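
With that approach you format and mount the filesystem yourself and point each OSD at a directory with osd data. A minimal sketch; the mount point /srv/osd.0 is just an example:

        [osd.0]
                host = ceph-node-1
                # assumes /dev/sda6 is formatted and mounted at
                # /srv/osd.0 (e.g. via /etc/fstab) before the OSD starts
                osd data = /srv/osd.0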

If you really want to add performance, use more disks or a fast journal device (I use an SSD).
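
For a journal on an SSD, each OSD gets its own partition via the osd journal option. A sketch with made-up device names, assuming the SSD is /dev/sdb:

        [osd.0]
                host = ceph-node-1
                btrfs devs = /dev/sda6
                # journal on its own SSD partition
                osd journal = /dev/sdb1

        [osd.1]
                host = ceph-node-1
                btrfs devs = /dev/sda7
                osd journal = /dev/sdb2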

Stefan




On 04/11/2012 02:42 PM, Madhusudhana U wrote:
Stefan Kleijkers <stefan <at> unilogicnetworks.net> writes:

Hello,

Yes, that's no problem; I've been using that configuration for some time now.
Just generate a config with multiple OSD sections that share the same host.

With newer Ceph versions, mkcephfs is smart enough to detect OSDs on the
same node and will generate a CRUSH map that replicates objects to
different nodes.
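
Concretely, that means the generated rule chooses leaves by host rather than by device, so replicas of an object never end up on the same node. Decompiled, the data rule looks roughly like this (illustrative; the exact output varies by version):

        rule data {
                ruleset 0
                type replicated
                min_size 1
                max_size 10
                step take default
                # pick N distinct hosts, then one OSD under each
                step chooseleaf firstn 0 type host
                step emit
        }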

I didn't see any performance impact, provided you have enough processing
power (running more OSDs per node needs more of it).

I wanted to use just a few OSDs per node on top of mdraid so I could use
RAID6; that way I could swap a faulty disk without bringing the node
down. But I couldn't get it stable with mdraid.

This is what the OSD part of my ceph.conf looks like:

[osd.0]
         host = ceph-node-1
         btrfs devs = /dev/sda6

[osd.1]
         host = ceph-node-2
         btrfs devs = /dev/sda6

[osd.2]
         host = ceph-node-3
         btrfs devs = /dev/sda6

[osd.3]
         host = ceph-node-4
         btrfs devs = /dev/sda6



Can you please help me with how I can add multiple OSDs on the same
machine, considering that I have 4 partitions created for OSDs?

I have powerful machines with six quad-core Intel Xeon CPUs and 48 GB of RAM.








