Hello Sailaja,
Do you still have this problem?
Have you checked the CRUSH rule for your pools to see whether its data
distribution requirements can actually be met?
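On a single-node cluster the default replicated rule (failure domain "host")
combined with the default pool size of 3 usually cannot be satisfied, which
would explain the undersized and peered PGs in your status below. A quick
sketch of how to check, and one possible workaround (pool name and rule name
here are placeholders, adjust to your setup):

    # show the CRUSH rules and the failure domain ("type") of their chooseleaf step
    ceph osd crush rule dump

    # show size, min_size and the crush_rule used by each pool
    ceph osd pool ls detail

    # one option on a single host: a rule that places replicas across OSDs instead of hosts
    ceph osd crush rule create-replicated replicated-osd default osd
    ceph osd pool set <pool> crush_rule replicated-osd    # repeat for each pool

With three OSDs in one host, such a rule lets size 3 pools become active+clean;
alternatively the pool size could be reduced, but that weakens redundancy.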
Regards, Joachim
___________________________________
Clyso GmbH
Homepage: https://www.clyso.com
On 24.04.2020 at 16:02, Sailaja Yedugundla wrote:
I am trying to set up a single-node cluster using cephadm. I was able to start the cluster with 1 monitor and 3 OSDs. When I try to create users for the RADOS Gateway with radosgw-admin user create, the command hangs. Here is my cluster status:
  cluster:
    id:     5a03d7e2-85e4-11ea-bba9-021f94750a41
    health: HEALTH_WARN
            Reduced data availability: 44 pgs inactive
            Degraded data redundancy: 6/15 objects degraded (40.000%), 2 pgs degraded, 80 pgs undersized

  services:
    mon: 1 daemons, quorum ip-172-31-9-253.ec2.internal (age 9h)
    mgr: ip-172-31-9-253.ec2.internal.umudgc(active, since 9h)
    osd: 3 osds: 3 up (since 74m), 3 in (since 74m); 41 remapped pgs

  data:
    pools:   4 pools, 97 pgs
    objects: 5 objects, 1.2 KiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     45.361% pgs not active
             6/15 objects degraded (40.000%)
             2/15 objects misplaced (13.333%)
             44 undersized+peered
             24 active+undersized+remapped
             17 active+clean
             10 active+undersized
             2  active+undersized+degraded

  progress:
    Rebalancing after osd.1 marked in (74m)
      [=========...................] (remaining: 2h)
    Rebalancing after osd.2 marked in (74m)
      [............................]
Can someone help me resolve this issue?
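For reference, the hanging command would be invoked roughly like this (uid and
display name are placeholders, not taken from the post):

    radosgw-admin user create --uid=testuser --display-name="Test User"

radosgw-admin writes to the RGW metadata pools through librados, so it blocks
for as long as the PGs backing those pools are inactive, which matches the 44
inactive PGs shown above.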
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx