Hi,
BTW, is there a way to achieve redundancy over multiple OSDs in
one box by changing the CRUSH map?
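I am guessing it would involve changing the failure domain in the
CRUSH rule from host to osd, something along these lines (just my
understanding, untested):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

then editing the rule in crushmap.txt so that
"step chooseleaf firstn 0 type host"
becomes
"step chooseleaf firstn 0 type osd"
and compiling and loading it back:

crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new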
Thank you
Jiri
On 20/01/2015 13:37, Jiri Kanicky wrote:
Hi,
Thanks for the reply. That clarifies it. I thought redundancy
could be achieved with multiple OSDs (like multiple disks in RAID)
when you don't have more nodes. Obviously the single point of
failure would be the box.
My current setting is:
osd_pool_default_size = 2
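As far as I understand, osd_pool_default_size only applies to pools
created after it is set, so my existing pools may still be at size 3.
I believe that can be checked with something like:

ceph osd dump | grep 'replicated size'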
Thank you
Jiri
On 20/01/2015 13:13, Lindsay Mathieson wrote:
You only have one osd node (ceph4). The default
replication requirements for your pools (size =
3) require OSDs spread over three nodes, so the
data can be replicated on three different nodes.
That will be why your pgs are degraded.
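You can confirm how your OSDs are grouped with something like:

ceph osd tree

which shows which host each OSD sits under (and ceph health detail
will list the degraded pgs).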
You need to either add more OSD nodes or reduce your
size setting down to the number of OSD nodes you
have.
Setting your size to 1 would be a bad idea; there
would be no redundancy in your data at all. Losing
one disk would destroy all your data.
The command to see your pool size is:
sudo ceph osd pool get <poolname> size
assuming default setup:
ceph osd pool get rbd size
returns: 3
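And if you do decide to change the size setting, the matching set
command would be along the lines of:

ceph osd pool set rbd size 2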
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com