Hi,
first, you can bootstrap a cluster by providing the container image
path in the bootstrap command like this:
cephadm --image *<hostname>*:5000/ceph/ceph bootstrap --mon-ip *<mon-ip>*
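In your case that would be something like this (untested sketch; the registry name, image tag and mon IP are taken from your output, adjust the port if your registry doesn't answer on the default one):

cephadm --image private-registery.fst/ceph/ceph:v16.2.7 bootstrap --mon-ip 10.20.23.65

That way you don't need to touch /usr/sbin/cephadm at all.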
Check out the docs for deployment in an isolated environment [1]; I don't think it's a good idea to modify the container runtime config and the cephadm script the way you did. The container image paths are configurable, for example you can set them like this:
ceph config set global container_image <local-registry>:5000/my/ceph/image
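The monitoring stack images can be overridden the same way; if I remember the config keys correctly it looks something like this (the image paths are just placeholders for whatever you mirror):

ceph config set mgr mgr/cephadm/container_image_prometheus <local-registry>:5000/my/prometheus/image
ceph config set mgr mgr/cephadm/container_image_node_exporter <local-registry>:5000/my/node-exporter/image
ceph config set mgr mgr/cephadm/container_image_grafana <local-registry>:5000/my/grafana/image
ceph config set mgr mgr/cephadm/container_image_alertmanager <local-registry>:5000/my/alertmanager/image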
Also, your subject seems misleading: you write "mgr not available", but the logs you pasted show this:
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
So the mgr seems to work fine; it's your bootstrap host that is not ready to be managed by cephadm:
... opkbhfpsbpp0101.fst/ceph/ceph:v16.2.7 orch host add opcpmfpsbpp0101 10.20.23.65
/usr/bin/ceph: stderr Error EINVAL: Failed to connect to opcpmfpsbpp0101 (10.20.23.65).
/usr/bin/ceph: stderr Please make sure that the host is reachable and accepts connections using the cephadm SSH key
Is your host reachable and did you configure SSH access?
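If not, the commands from your own error output should get you there (run them from a cephadm shell on the bootstrap host; I'm assuming you deploy as root, adjust the user otherwise):

ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@10.20.23.65

After that you should be able to add the host again with ceph orch host add opcpmfpsbpp0101 10.20.23.65.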
[1]
https://docs.ceph.com/en/latest/cephadm/install/#deployment-in-an-isolated-environment
Quoting farhad kh <farhad.khedriyan@xxxxxxxxx>:
Hi,
I want to use a private registry to run my Ceph storage cluster, so I changed the default registry of my container runtime (docker) in /etc/docker/deamon.json:
{
"registery-mirrors": ["https://private-registery.fst"]
}
and I replaced all registry addresses in /usr/sbin/cephadm (quay.ceph.io and docker.io) with my private registry:

cat /usr/sbin/cephadm | grep private-registery.fst
DEFAULT_IMAGE = 'private-registery.fst/ceph/ceph:v16.2.7'
DEFAULT_PROMETHEUS_IMAGE = 'private-registery.fst/ceph/prometheus:v2.18.1'
DEFAULT_NODE_EXPORTER_IMAGE = 'private-registery.fst/ceph/node-exporter:v0.18.1'
DEFAULT_ALERT_MANAGER_IMAGE = 'private-registery.fst/ceph/alertmanager:v0.20.0'
DEFAULT_GRAFANA_IMAGE = 'private-registery.fst/ceph/ceph-grafana:6.7.4'
DEFAULT_HAPROXY_IMAGE = 'private-registery.fst/ceph/haproxy:2.3'
DEFAULT_KEEPALIVED_IMAGE = 'private-registery.fst/ceph/keepalived'
DEFAULT_REGISTRY = 'private-registery.fst'   # normalize unqualified digests to this
>>> normalize_image_digest('ceph/ceph:v16', 'private-registery.fst')
>>> normalize_image_digest('private-registery.fst/ceph/ceph:v16', 'private-registery.fst')
'private-registery.fst/ceph/ceph:v16'
>>> normalize_image_digest('private-registery.fst/ceph', 'private-registery.fst')
>>> normalize_image_digest('localhost/ceph', 'private-registery.fst')
When I try to deploy the first node of the cluster with cephadm, I get this error:

cephadm bootstrap --mon-ip 10.20.23.65 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password admin --dashboard-password-noupdate
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: e52bee78-db8b-11ec-9099-00505695f8a8
Verifying IP 10.20.23.65 port 3300 ...
Verifying IP 10.20.23.65 port 6789 ...
Mon IP `10.20.23.65` is in CIDR network `10.20.23.0/24`
- internal network (--cluster-network) has not been provided, OSD
replication will default to the public_network
Pulling container image private-registery.fst/ceph/ceph:v16.2.7...
Ceph version: ceph version 16.2.7
(dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 10.20.23.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host opcpmfpsbpp0101...
Non-zero exit code 22 from /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=private-registery.fst/ceph/ceph:v16.2.7 -e NODE_NAME=opcpmfpsbpp0101 -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/e52bee78-db8b-11ec-9099-00505695f8a8:/var/log/ceph:z -v /tmp/ceph-tmpwt99ep2e:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpweojwqdh:/etc/ceph/ceph.conf:z opkbhfpsbpp0101.fst/ceph/ceph:v16.2.7 orch host add opcpmfpsbpp0101 10.20.23.65
/usr/bin/ceph: stderr Error EINVAL: Failed to connect to opcpmfpsbpp0101 (10.20.23.65).
/usr/bin/ceph: stderr Please make sure that the host is reachable and accepts connections using the cephadm SSH key
/usr/bin/ceph: stderr
/usr/bin/ceph: stderr To add the cephadm SSH key to the host:
/usr/bin/ceph: stderr > ceph cephadm get-pub-key > ~/ceph.pub
/usr/bin/ceph: stderr > ssh-copy-id -f -i ~/ceph.pub root@10.20.23.65
/usr/bin/ceph: stderr
/usr/bin/ceph: stderr To check that the host is reachable open a new shell with the --no-hosts flag:
/usr/bin/ceph: stderr > cephadm shell --no-hosts
/usr/bin/ceph: stderr
/usr/bin/ceph: stderr Then run the following:
/usr/bin/ceph: stderr > ceph cephadm get-ssh-config > ssh_config
/usr/bin/ceph: stderr > ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_private_key
/usr/bin/ceph: stderr > chmod 0600 ~/cephadm_private_key
/usr/bin/ceph: stderr > ssh -F ssh_config -i ~/cephadm_private_key root@10.20.23.65
ERROR: Failed to add host <opcpmfpsbpp0101>: Failed command: /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=private-registery.fst/ceph/ceph:v16.2.7 -e NODE_NAME=opcpmfpsbpp0101 -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/e52bee78-db8b-11ec-9099-00505695f8a8:/var/log/ceph:z -v /tmp/ceph-tmpwt99ep2e:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpweojwqdh:/etc/ceph/ceph.conf:z private-registery.fst/ceph/ceph:v16.2.7 orch host add opcpmfpsbpp0101 10.20.23.65
Why does this happen, and how can I solve it?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx