Re: Ceph Storage || Deploy/Install/Bootstrap a Ceph Cluster || Cephadm Orchestrator CLI method

You are already in the right place as far as the docs are concerned. You can check the cephadm help page to see which other options you have. The easiest way for you would be to bootstrap without the monitoring stack:

cephadm bootstrap --mon-ip 192.168.2.125 --skip-monitoring-stack

This should bring up your cluster successfully. Then expand the cluster by adding the other nodes and deploying OSDs, and once everything is set up, deploy the monitoring stack without ceph-exporter:

ceph orch apply prometheus
ceph orch apply grafana
ceph orch apply node-exporter

This is just an example; there are several ways to do this and to control daemon placement, but it should get you started.
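To sketch the expansion steps (the hostnames and IPs below are placeholders, substitute your own; --all-available-devices is just the simplest OSD spec, a YAML service spec gives finer control):

ceph orch host add mgr1 192.168.2.126
ceph orch host add osd1 192.168.2.127
ceph orch host add osd2 192.168.2.128
ceph orch apply osd --all-available-devices

Placement can also be pinned per service, for example:

ceph orch apply prometheus --placement="mon1"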

Quoting ankit@xxxxxxxxx:

Hi Eugen,

Thank you very much for looking into this.

But as I mentioned earlier, I am trying to build a Ceph cluster for the first time, so could you please help me build it, or point me to any documentation where all the details are available so that I can follow it?




Regards,
Ankit Sharma




-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Friday, February 9, 2024 2:14 PM
To: ceph-users@xxxxxxx
Subject: Re: Ceph Storage || Deploy/Install/Bootstrap a Ceph Cluster || Cephadm Orchestrator CLI method

Hi,

I don't really know how the ceph-exporter gets into your Quincy bootstrap; when I deploy with Quincy (cephadm also from the Quincy repo) it doesn't try to deploy ceph-exporter. When I use cephadm from the Reef repo, it deploys a Reef cluster, including ceph-exporter, successfully. As a workaround, you should be able to deploy the cluster if you skip the monitoring stack during bootstrap and add it later, after you have upgraded to Reef; the parameter is --skip-monitoring-stack. Or you deploy directly with Reef using the --image option:

cephadm --image quay.io/ceph/ceph:v18 bootstrap ...
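For example, combined with the mon IP from your bootstrap attempt (v18 being the Reef image tag), that would look roughly like:

cephadm --image quay.io/ceph/ceph:v18 bootstrap --mon-ip 192.168.2.125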

The quincy docs [1] already contain the information about the ceph-exporter:

With the introduction of ceph-exporter daemon, the prometheus module
will no longer export Ceph daemon perf counters as prometheus metrics
by default.

But when trying to apply it in Quincy it fails as well:

[root@quincy-3 ~]# ceph orch apply ceph-exporter
Error EINVAL: Usage:
  ceph orch apply -i <yaml spec> [--dry-run]
  ceph orch apply <service_type> [--placement=<placement_string>] [--unmanaged]

I'll check if there's an existing tracker issue.
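For reference, the "-i <yaml spec>" variant in that usage output expects a service spec file; a minimal sketch for a service type Quincy does know, e.g. prometheus with a placeholder host, would be:

service_type: prometheus
placement:
  hosts:
    - mon1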

Thanks,
Eugen

[1]
https://docs.ceph.com/en/quincy/mgr/prometheus/#ceph-daemon-performance-counters-metrics

Quoting ankit@xxxxxxxxx:

Hi Guys,

I am a newbie trying to install a Ceph storage cluster, following this:
https://docs.ceph.com/en/latest/cephadm/install/#cephadm-deploying-new-cluster

=============================================================
OS - Ubuntu 22.04.3 LTS (Jammy Jellyfish)

4-node cluster - mon1, mgr1, 2 OSD nodes

The mon1 node can SSH to all nodes, as root to the sudo-enabled ceph-user and from ceph-user to ceph-user on the other nodes.

Basic requirements like podman, python3, systemd, ntp and lvm are in place.
===================================================================

cephadm bootstrap --mon-ip 192.168.2.125 - after running this I am getting the following error:

ceph-user@mon1:~$ sudo cephadm bootstrap --mon-ip 192.168.2.125
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 3.4.4 is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 90813682-c656-11ee-9ca3-0800274ff361
Verifying IP 192.168.2.125 port 3300 ...
Verifying IP 192.168.2.125 port 6789 ...
Mon IP `192.168.2.125` is in CIDR network `192.168.2.0/24`
Mon IP `192.168.2.125` is in CIDR network `192.168.2.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.2.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr not available, waiting (5/15)...
mgr not available, waiting (6/15)...
mgr not available, waiting (7/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host mon1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Non-zero exit code 22 from /usr/bin/podman run --rm --ipc=host
--stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e
CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=mon1 -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/90813682-c656-11ee-9ca3-0800274ff361:/var/log/ceph:z
-v /tmp/ceph-tmpnjonhex7:/etc/ceph/ceph.client.admin.keyring:z -v
/tmp/ceph-tmp3gil6lbb:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
apply ceph-exporter
/usr/bin/ceph: stderr Error EINVAL: Usage:
/usr/bin/ceph: stderr   ceph orch apply -i <yaml spec> [--dry-run]
/usr/bin/ceph: stderr   ceph orch apply <service_type> [--placement=<placement_string>] [--unmanaged]
/usr/bin/ceph: stderr
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 9653, in <module>
    main()
  File "/usr/sbin/cephadm", line 9641, in main
    r = ctx.func(ctx)
  File "/usr/sbin/cephadm", line 2205, in _default_image
    return func(ctx)
  File "/usr/sbin/cephadm", line 5774, in command_bootstrap
    prepare_ssh(ctx, cli, wait_for_mgr_restart)
  File "/usr/sbin/cephadm", line 5275, in prepare_ssh
    cli(['orch', 'apply', t])
  File "/usr/sbin/cephadm", line 5708, in cli
    return CephContainer(
  File "/usr/sbin/cephadm", line 4144, in run
    out, _, _ = call_throws(self.ctx, self.run_cmd(),
  File "/usr/sbin/cephadm", line 1853, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host
--stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e
CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=mon1 -e
CEPH_USE_RANDOM_NONCE=1 -v
/var/log/ceph/90813682-c656-11ee-9ca3-0800274ff361:/var/log/ceph:z
-v /tmp/ceph-tmpnjonhex7:/etc/ceph/ceph.client.admin.keyring:z -v
/tmp/ceph-tmp3gil6lbb:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch
apply ceph-exporter


What am I doing wrong or missing? Please help.

Many Thanks
AS




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


