Got it: one instance per host is enough.
In my case, I'm not using "ceph orch".
We did it manually, crafting one docker-compose.yml per host.
The question is:
Is it possible to run a "crash" instance per host, or does the solution
oblige me to adopt cephadm?
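What I have in mind is something like the sketch below. This is only a
rough idea, assuming the ceph/daemon image ships the ceph-crash watcher
script and that a client.crash keyring exists (both are assumptions):

services:
  crash:
    container_name: ceph-crash
    image: ceph/daemon:latest-nautilus
    entrypoint: ceph-crash            # assumption: the image includes the ceph-crash script
    restart: unless-stopped
    network_mode: host
    volumes:
      - ../ceph.conf:/etc/ceph/ceph.conf
      - ./ceph.client.crash.keyring:/etc/ceph/ceph.client.crash.keyring   # hypothetical keyring file for the crash client
      - /var/lib/ceph/crash:/var/lib/ceph/crash    # host path where the local daemons write crash dumps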
Thanks!
[]'s
Arthur
On 15/09/2021 08:30, Eugen Block wrote:
Hi,
ceph-crash services are standalone containers; they are not running
inside other containers:
host1:~ # ceph orch ls
NAME   RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME  IMAGE ID
crash  4/4      9m ago     3w   *          mix         d2b64e3c3805
Do you see it in your specs? Can you share this output:
ceph orch ls --export --format yaml
You can add the crash service to a spec file and apply it with 'ceph
orch apply -i crash-service.yml' where the yml file could look like this:
service_type: crash
service_name: crash
placement:
  host_pattern: '*'
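After applying the spec, the orchestrator should schedule one crash
daemon per matching host. Something like this should confirm it (exact
output varies by release):

host1:~ # ceph orch apply -i crash-service.yml
host1:~ # ceph orch ps | grep crash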
Quoting Guilherme Geronimo <guilherme.geronimo@xxxxxxxxx>:
Hey Guys!
I'm running my entire cluster (12 hosts / 89 OSDs, v15.2.22) on Docker,
and everything runs smoothly.
But I'm kind of "blind" here: ceph-crash is not running inside the
containers, and there's nothing related to "ceph-crash" in the docker
logs either.
Is there a special way to configure it?
Should I create an external volume and run a single instance of it?
Thanks!
Guilherme Geronimo (aka Arthur)
docker-compose example:
services:
  osd.106:
    container_name: osd106
    image: ceph/daemon:latest-nautilus
    command: osd_directory_single    # ceph/daemon scenario: single OSD backed by a directory
    restart: unless-stopped
    pid: "host"
    network_mode: host
    privileged: true
    volumes:
      - /dev/:/dev/
      - ../ceph.conf:/etc/ceph/ceph.conf   # mount the file itself; mounting a file over the /etc/ceph directory fails
      - ./data/ceph-106/:/var/lib/ceph/osd/ceph-106
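One note on this layout (an assumption about the setup): for a per-host
ceph-crash watcher to find anything, the daemon containers would also
need /var/lib/ceph/crash bind-mounted from the host, since that is the
default directory where Ceph daemons write their crash dumps.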
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx