On 10/11/2013 10:25 AM, Ansgar Jazdzewski wrote:
Hi,
I updated my cluster yesterday and everything went well.
But today I got an error I have never seen before.
-----
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 2.5 is active+clean+inconsistent, acting [9,4]
1 scrub errors
-----
Any idea how to fix it?
After the upgrade I created a new pool with a higher pg_num
(rbd_new, pg_num 1024).
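For context, a pool like that is normally created with `ceph osd pool create`; a sketch of what that step may have looked like (the pool name and PG counts are taken from the `ceph osd dump` output below, the exact invocation is an assumption):

```shell
# Hypothetical reconstruction of the pool-creation step.
# Arguments: <pool-name> <pg_num> [pgp_num]
ceph osd pool create rbd_new 1024 1024
```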
-----
# ceph osd dump | grep rep\ size
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 owner 0
pool 3 'rbd_new' rep size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 2604 owner 0
-----
But I guess this could have caused the error?
No, since the PG is in the pool 'rbd'. A PG ID is always prefixed
with the pool ID, so 'pg 2.5' here means PG 5 in pool 2, which is
'rbd'.
I recommend you try repairing the PG, see:
http://ceph.com/docs/master/rados/operations/control/
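In concrete terms, for the PG reported by `ceph health detail` that would be something like the following (a sketch; note that in releases of this era `pg repair` generally favours the primary OSD's copy, so checking the OSD logs for the actual scrub error first is worthwhile):

```shell
# Instruct the primary OSD to re-scrub and repair the inconsistent PG
ceph pg repair 2.5

# Then watch the cluster state until it returns to HEALTH_OK
ceph -s
ceph health detail
```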
Wido
Thanks for any help
Ansgar
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on