Hello,
Yes, that's no problem. I've been using that configuration for some time now.
Just generate a config with multiple OSD clauses with the same node/host.
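Something like this, as a minimal sketch (the hostnames and data paths here
are just placeholders, adjust them to your setup):

    [osd.0]
        host = node1
        osd data = /srv/osd.0
    [osd.1]
        host = node1
        osd data = /srv/osd.1
    [osd.2]
        host = node2
        osd data = /srv/osd.2
    [osd.3]
        host = node2
        osd data = /srv/osd.3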
With newer Ceph versions, mkcephfs is smart enough to detect the OSDs that
live on the same node and will generate a crushmap that replicates the
objects to different nodes.
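Concretely, the generated crushmap ends up with a rule that chooses leaves
per host instead of per OSD, roughly like this (decompiled CRUSH syntax,
the rule and root names may differ on your cluster):

    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # pick N distinct hosts, then one OSD under each,
        # so replicas never end up on the same node
        step chooseleaf firstn 0 type host
        step emit
    }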
I didn't see any negative impact on performance (as long as you have enough
processing power, because running more OSDs per node needs more of it).
I wanted to use just a few OSDs per node on top of mdraid, so I could use
RAID6. That way I could swap a faulty disk without bringing the node
down, but I couldn't get it stable with mdraid.
Stefan
On 04/11/2012 09:42 AM, Madhusudhana U wrote:
Hi all,
I have a system with a 2T SATA drive and I want to add it to my Ceph
cluster. Instead of creating one large OSD, can't I have 4 OSDs of
450G each? Is this possible? If possible, will this improve
read/write performance?
Thanks
Madhusudhana
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html