Re: Ceph status HEALTH_WARN - pgs problems

Hi,

Please add some more output, e.g.:

ceph -s
ceph osd tree
ceph osd pool ls detail
ceph osd crush rule dump (for the ruleset(s) in use)
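
For instance, to list the rules and dump just the one your pools use (assuming the default replicated rule name replicated_rule, adjust if yours differs):

ceph osd crush rule ls
ceph osd crush rule dump replicated_rule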

You have the pg_autoscaler enabled, so you don't need to deal with pg_num manually.
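
If you want to hand pg_num back to the autoscaler, something along these lines should do (pool name taken from your mail, adjust as needed):

ceph osd pool set libvirt-pool pg_autoscale_mode on
ceph osd pool autoscale-status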


Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:

Hi,

My cluster is up and running. I saw a note in ceph status that 1 pg was undersized. I read about the number of PGs and the recommended value (OSDs*100/poolsize => 6*100/3 = 200). The pg_num should be raised carefully, so I raised it to 2 and ceph status was fine again. So I left it like that.

Then I created a new pool: libvirt-pool.

Now ceph status is again in warning regarding pgs. I raised pg_num_max of the libvirt-pool to 265 and pg_num to 128.
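
For reference, I did that with commands along these lines (exact invocation may differ slightly):

ceph osd pool set libvirt-pool pg_num_max 265
ceph osd pool set libvirt-pool pg_num 128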

Ceph status stays in warning.
root@hvs001:/# ceph status
...
    health: HEALTH_WARN
            Reduced data availability: 64 pgs inactive
            Degraded data redundancy: 68 pgs undersized
...
   pgs:     94.118% pgs not active
4/6 objects misplaced (66.667%) (this has been there since the cluster was created)
             64 undersized+peered
             4  active+undersized+remapped

I also get a 'progress: Global Recovery Event (0s)' which only goes away with 'ceph progress clear'.

My autoscale-status is the following:
root@hvs001:/# ceph osd pool autoscale-status
POOL          SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr          576.5k               3.0   1743G         0.0000                                 1.0        1              on         False
libvirt-pool       0               3.0   1743G         0.0000                                 1.0       64              on         False

(It's a 3-node cluster with 2 OSDs per node.)

The documentation doesn't help me much here. What should I do?

Greetings,

Dominique.



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


