Dear all,
I am deploying a Ceph system for the first time.
I have 3 servers, and I intend to run 1 manager, 1 mon, and 12 OSDs on
each.
Since they are already used in production, I selected a single machine
to begin the deployment, but I got stuck when creating RBD pools.
The host OS is CentOS 7, and cephadm allowed me to install Octopus.
These are the commands I have issued so far:
./cephadm add-repo --release octopus
./cephadm install ceph-common
cephadm bootstrap --mon-ip "X.X.X.X" # edited for privacy, real IP used.
ceph orch daemon add osd darkside2:/dev/sdb
This last add command was repeated 12 times, once for each block
device to be added to Ceph storage.
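For completeness, the whole sequence looked roughly like the loop below
(I am assuming here that the data disks are /dev/sdb through /dev/sdm;
the exact device names on darkside2 may differ slightly):

    for dev in /dev/sd{b..m}; do
        ceph orch daemon add osd darkside2:$dev
    done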
ceph osd pool create lgcmUnsafe 128 128
Up to this point everything seemed fine: no error messages in
journalctl or in /var/log/ceph/cephadm.log, and I ran ceph status after
each command with output that seemed consistent.
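In case it is relevant, these are roughly the checks I ran between
steps:

    ceph status
    journalctl -u 'ceph*' --since today
    tail -n 100 /var/log/ceph/cephadm.log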
This command, however, hangs indefinitely, with no error or warning
message anywhere:
rbd pool init lgcmUnsafe
I canceled the command with Ctrl+C and issued ceph status. This is the
output:
  cluster:
    id:     1902a026-496d-11ed-b43e-08c0eb320ec2
    health: HEALTH_WARN
            Reduced data availability: 128 pgs inactive
            Degraded data redundancy: 128 pgs undersized

  services:
    mon: 1 daemons, quorum darkside2 (age 19h)
    mgr: darkside2.umccvh(active, since 19h)
    osd: 12 osds: 12 up (since 19h), 12 in (since 4d); 1 remapped pgs

  data:
    pools:   2 pools, 129 pgs
    objects: 13 objects, 0 B
    usage:   12 GiB used, 175 TiB / 175 TiB avail
    pgs:     99.225% pgs not active
             26/39 objects misplaced (66.667%)
             128 undersized+peered
             1   active+clean+remapped
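If it helps, I can also post the output of the following (these are the
standard diagnostic commands as far as I understand; lgcmUnsafe is the
pool created above):

    ceph health detail
    ceph osd tree
    ceph osd pool ls detail
    ceph osd pool get lgcmUnsafe size
    ceph pg dump_stuck inactive

I could also re-run the init with client-side debug logging turned up
(e.g. rbd pool init lgcmUnsafe --debug-rbd 20, if I have the option
syntax right) and share that log.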
Could someone more knowledgeable help me debug this, please? Thanks in
advance!
Cordially,
Renata.