Re: Ceph Degraded

Hi Andrei!

I had a similar setting with replicated size 2 and min_size also 2.

Changing that didn't change the status of the cluster.

I've also tried to remove the pools and recreate them, without success.
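
For reference, the removal/recreation was along these lines, using the rbd pool and the 512 PG counts from my setup as an example:

# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
# ceph osd pool create rbd 512 512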

Removing and re-adding the OSDs didn't have any effect either!
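
For the record, the removal followed the usual manual procedure, roughly as below (the daemon stop command depends on how your OSDs are managed), and the OSDs were then re-added the same way they were originally deployed:

# ceph osd out <osd-id>
# service ceph stop osd.<osd-id>
# ceph osd crush remove osd.<osd-id>
# ceph auth del osd.<osd-id>
# ceph osd rm <osd-id>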

Therefore, and since I didn't have any data at all, I performed a force recreate on all PGs, and after that things went back to normal.
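
In case it is useful to anyone else searching the archives, the force recreate was done per PG, roughly along these lines, with the PG IDs taken from the stuck list and one force_create_pg issued per stuck PG:

# ceph pg dump_stuck unclean
# ceph pg force_create_pg <pgid>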

Thanks for your reply!


Best,


George

On Sat, 29 Nov 2014 11:39:51 +0000 (GMT), Andrei Mikhailovsky wrote:
I think I had a similar issue recently when I added a new pool. All
PGs that corresponded to the new pool were shown as degraded/unclean.
After doing a bit of testing I realized that my issue was down to
this:

replicated size 2
min_size 2

The replicated size and min_size were the same. In my case I've got 2 OSD
servers with a total replica count of 2. The min_size should be set to 1,
so that the cluster can still serve I/O when only one replica is available.

After I changed the min_size to 1 the cluster sorted itself out.
Try doing this for your pools.
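
Something like this should do it, once per pool:

# ceph osd pool set data min_size 1
# ceph osd pool set metadata min_size 1
# ceph osd pool set rbd min_size 1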

Andrei

-------------------------

FROM: "Georgios Dimitrakakis"
TO: ceph-users@xxxxxxxxxxxxxx
SENT: Saturday, 29 November, 2014 11:13:05 AM
SUBJECT:  Ceph Degraded

Hi all!

I am setting up a new cluster with 10 OSDs
and its state is degraded!

# ceph health
HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean
#

There are only the default pools

# ceph osd lspools
0 data,1 metadata,2 rbd,

with each one having pg_num 512 and pgp_num 512

# ceph osd dump | grep replic
pool 0 'data' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 286 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 287 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 288 flags hashpspool stripe_width 0

There is no data yet, so is there something I can do to repair it as it is?

Best regards,

George
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




