Ceph error after upgrading from Argonaut to Bobtail to Cuttlefish

Hi,

I updated my cluster yesterday and everything went well.
But today I got an error I have never seen before.

-----
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 2.5 is active+clean+inconsistent, acting [9,4]
1 scrub errors
-----

Any idea how to fix it?
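
From the docs I understand that a pg repair via the primary might fix this. Would something like the following be the right approach? (osd.9 should be the primary for pg 2.5; the log path below is an assumption based on the default layout.)

-----
# Look for the scrub error details in the primary OSD's log
# (assuming the default log path /var/log/ceph/ceph-osd.9.log)
grep 2.5 /var/log/ceph/ceph-osd.9.log | grep -i err

# Repair the pg; as far as I understand, this overwrites the replica's
# copy with the primary's, so it only helps if the primary copy is good
ceph pg repair 2.5
-----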

After the upgrade I created a new pool with a higher pg_num (rbd_new, pg_num 1024).
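
For reference, the pool was created roughly like this (from memory; pg_num and pgp_num match the dump below):

-----
# Create the new pool with 1024 placement groups
ceph osd pool create rbd_new 1024 1024
-----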

-----
# ceph osd dump | grep rep\ size
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 3 'rbd_new' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 2604 owner 0
-----

Could this be what caused the error?
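
If it helps, I could re-run a deep scrub on the pg to check whether the inconsistency is reproducible (assuming the per-pg deep-scrub command is available in Cuttlefish):

-----
# Trigger another deep scrub on the inconsistent pg
ceph pg deep-scrub 2.5

# Watch the cluster log for the scrub result
ceph -w
-----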

Thanks for any help
Ansgar

