Re: PGs degraded with 3 MONs and 1 OSD node

Hi,

BTW, is there a way to achieve redundancy over multiple OSDs in one box by changing the CRUSH map?
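
What I have in mind is something along these lines (a rough sketch; the file names are just placeholders and the rule contents may differ on my cluster), changing the failure domain of the replicated rule from host to osd:

sudo ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# in the replicated ruleset, change
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd
crushtool -c crushmap.txt -o crushmap.new
sudo ceph osd setcrushmap -i crushmap.new

Would that be the right approach?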

Thank you
Jiri

On 20/01/2015 13:37, Jiri Kanicky wrote:
Hi,

Thanks for the reply. That clarifies it. I thought that redundancy could be achieved with multiple OSDs (like multiple disks in RAID) when you don't have more nodes. Obviously the single point of failure would be the box.

My current setting is:
osd_pool_default_size = 2
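
If I understand it correctly, this default only applies at pool-creation time, so pools created earlier keep whatever size they had; I assume the size each pool is actually using shows up in the output of something like:

sudo ceph osd dump | grep 'replicated size'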

Thank you
Jiri


On 20/01/2015 13:13, Lindsay Mathieson wrote:
You only have one osd node (ceph4). The default replication requirement for your pools (size = 3) requires OSDs spread over three nodes, so the data can be replicated on three different nodes. That is why your pgs are degraded.

You need to either add more OSD nodes or reduce your size setting down to the number of OSD nodes you have.

Setting your size to 1 would be a bad idea: there would be no redundancy in your data at all. Losing a single disk would mean losing the data stored on it, with no way to recover it.

The command to see your pool size is:

sudo ceph osd pool get <poolname> size

assuming default setup:

ceph osd pool get rbd size
returns: 3
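
To actually reduce the size on an existing pool, something along these lines should do it (just a sketch based on your default setup; adjust the pool name):

ceph osd pool set <poolname> size 2

e.g.

ceph osd pool set rbd size 2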

On 20 January 2015 at 10:51, Jiri Kanicky <j@xxxxxxxxxx> wrote:
Hi,

I would just like to clarify whether I should expect degraded PGs with 11 OSDs in one node. I am not sure if a setup with 3 MON nodes and 1 OSD node (11 disks) allows me to have a healthy cluster.

$ sudo ceph osd pool create test 512
pool 'test' created

$ sudo ceph status
    cluster 4e77327a-118d-450d-ab69-455df6458cd4
     health HEALTH_WARN 512 pgs degraded; 512 pgs stuck unclean; 512 pgs undersized
     monmap e1: 3 mons at {ceph1=172.16.41.31:6789/0,ceph2=172.16.41.32:6789/0,ceph3=172.16.41.33:6789/0}, election epoch 36, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e190: 11 osds: 11 up, 11 in
      pgmap v342: 512 pgs, 1 pools, 0 bytes data, 0 objects
            53724 kB used, 9709 GB / 9720 GB avail
                 512 active+undersized+degraded

$ sudo ceph osd tree
# id    weight  type name       up/down reweight
-1      9.45    root default
-2      9.45            host ceph4
0       0.45                    osd.0   up      1
1       0.9                     osd.1   up      1
2       0.9                     osd.2   up      1
3       0.9                     osd.3   up      1
4       0.9                     osd.4   up      1
5       0.9                     osd.5   up      1
6       0.9                     osd.6   up      1
7       0.9                     osd.7   up      1
8       0.9                     osd.8   up      1
9       0.9                     osd.9   up      1
10      0.9                     osd.10  up      1


Thank you,
Jiri


--
Lindsay


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
