I’m trying to set up a single-node ceph “cluster”; for my test that
would be sufficient. Of course it could be that ceph orch isn’t
meant to be used on a single node only, so maybe it’s worth trying
out three nodes…
root@terraformdemo:~# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094  0/1      -          7d   count:1
crash                                   0/0      -          7d   *
grafana                    ?:3000       0/1      -          7d   count:1
mgr                                     0/2      -          7d   count:2
mon                                     0/5      -          7d   count:5
node-exporter              ?:9100       0/0      -          7d   *
osd.all-available-devices               0/0      -          4d   *
prometheus                 ?:9095       0/1      -          7d   count:1
root@terraformdemo:~# ceph orch ps
No daemons reported
I’m not sure what this output means. No daemons running at all?
According to ceph status I have at least a running mon and a running
mgr, and I can access the ceph dashboard!
Again I have the impression that ceph orch is not in sync with the
underlying layer…
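As a hedged aside: when the orchestrator's view looks stale, cephadm has standard commands to force a fresh device scan and to restart the active mgr (which reloads the cephadm module). Whether they resolve this particular out-of-sync state is an assumption, but they are a common first step:

```shell
# Standard cephadm commands to refresh the orchestrator's view.
# Whether they fix this particular out-of-sync state is an assumption.
refresh_orch() {
    ceph orch device ls --refresh   # force a fresh device scan on all hosts
    ceph mgr fail                   # restart the active mgr, reloading cephadm
    ceph orch ps                    # re-check which daemons cephadm now sees
}
# Only attempt this where the ceph CLI actually exists:
if command -v ceph >/dev/null 2>&1; then
    refresh_orch
fi
```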
From: Yury Kirsanov <y.kirsanov@xxxxxxxxx>
Sent: Tuesday, 9 November 2021 14:46
To: Scharfenberg, Carsten <c.scharfenberg@xxxxxxxxxxxxx>
Cc: Сергей Процун <prosergey07@xxxxxxxxx>; Zach Heise
<heise@xxxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
Subject: Re: Re: fresh pacific installation does not
detect available disks
By the way, what does:
ceph orch ls
and
ceph orch ps
show?
On Wed, Nov 10, 2021 at 12:45 AM Yury Kirsanov
<y.kirsanov@xxxxxxxxx> wrote:
Hi,
Sorry, but I've never encountered such behaviour. I was first
testing my Ceph 16.2.6 in a VMware environment and it was all
working fine there: I was able to add hosts via ceph orch and then
add OSDs as usual. If I had to restart the deployment process for
some reason, I just wiped the HDDs using the dd command and it
picked up these HDDs again just fine... Not sure how to help you
here, but one thing confuses me a bit: why do you have just one
monitor? Maybe this somehow affects ceph orch? Of course this is
highly unlikely, but still... If you have nodes that can host OSDs,
you can deploy MONs to them as well, just use:
ceph orch apply mon --placement="3 <host1> <host2> <host3>"
Regards,
Yury.
On Wed, Nov 10, 2021 at 12:13 AM Scharfenberg, Carsten
<c.scharfenberg@xxxxxxxxxxxxx> wrote:
Thanks Yury,
ceph-volume always listed these devices as available, but ceph orch
does not; they do not seem to exist for ceph orch.
Adding them manually does not help either (I’ve tried that before and
now again):
root@terraformdemo:~# ceph orch daemon add osd 192.168.72.10:/dev/sdc
root@terraformdemo:~# ceph orch daemon add osd 192.168.72.10:/dev/sde
root@terraformdemo:~# ceph status
cluster:
  id:     655a7a32-3bbf-11ec-920e-000c29da2e6a
  health: HEALTH_WARN
          OSD count 0 < osd_pool_default_size 1

services:
  mon: 1 daemons, quorum terraformdemo (age 4m)
  mgr: terraformdemo.aylzbb(active, since 4m)
  osd: 0 osds: 0 up, 0 in (since 6d)

data:
  pools:   0 pools, 0 pgs
  objects: 0 objects, 0 B
  usage:   0 B used, 0 B / 0 B avail
  pgs:
Before doing that I rebooted the VM. I’ve also tried with the
hostname instead of the IP – no difference…
It’s also quite irritating that there are no error messages…
--
Carsten
From: Yury Kirsanov <y.kirsanov@xxxxxxxxx>
Sent: Tuesday, 9 November 2021 14:05
To: Scharfenberg, Carsten <c.scharfenberg@xxxxxxxxxxxxx>
Cc: Сергей Процун <prosergey07@xxxxxxxxx>; Zach Heise
<heise@xxxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
Subject: Re: Re: fresh pacific installation does not
detect available disks
Try to do:
ceph orch daemon add osd <host>:/dev/sdc
And then
ceph orch daemon add osd <host>:/dev/sde
This should succeed as sdc and sde are both marked as available at
the moment. Hope this helps!
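If the devices still carry leftover signatures, one option is to zap them through the orchestrator first and then add them. A hedged sketch: the host name and device paths are the ones from this thread, and the zap step is destructive:

```shell
# Hedged sketch: zap the device via cephadm, then add it as an OSD.
# Destructive on the target device; names are taken from this thread.
add_osd() {
    host="$1"; dev="$2"
    ceph orch device zap "$host" "$dev" --force   # wipe the device via cephadm
    ceph orch daemon add osd "$host:$dev"         # then create the OSD on it
}
# Example calls, guarded so they only run where the ceph CLI exists:
if command -v ceph >/dev/null 2>&1; then
    add_osd terraformdemo /dev/sdc
    add_osd terraformdemo /dev/sde
fi
```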
Regards,
Yury.
On Wed, Nov 10, 2021 at 12:01 AM Yury Kirsanov
<y.kirsanov@xxxxxxxxx> wrote:
By the way, /dev/sdc is now listed as available:
Device Path  Size      rotates  available  Model name
/dev/sdc     20.00 GB  True     True       VMware Virtual S
/dev/sde     20.00 GB  True     True       VMware Virtual S
On Tue, Nov 9, 2021 at 11:23 PM Scharfenberg, Carsten
<c.scharfenberg@xxxxxxxxxxxxx> wrote:
Thanks for your support, guys.
Unfortunately I do not know the tool sgdisk, and it’s not available
from the standard Debian package repository.
So I’ve tried out Yury’s approach of using dd… without success:
root@terraformdemo:~# dd if=/dev/zero of=/dev/sdc bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.05943 s, 1.0 GB/s
root@terraformdemo:~# dd if=/dev/zero of=/dev/sde bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.618606 s, 1.7 GB/s
root@terraformdemo:~# ceph-volume inventory
Device Path  Size      rotates  available  Model name
/dev/sdc     20.00 GB  True     True       VMware Virtual S
/dev/sde     20.00 GB  True     True       VMware Virtual S
/dev/sda     20.00 GB  True     False      VMware Virtual S
/dev/sdb     20.00 GB  True     False      VMware Virtual S
/dev/sdd     20.00 GB  True     False      VMware Virtual S
root@terraformdemo:~# ceph orch device ls
root@terraformdemo:~#
Do you have any other ideas? Could it be that ceph is not usable
with this kind of virtual hard disk?
--
Carsten
From: Сергей Процун <prosergey07@xxxxxxxxx>
Sent: Thursday, 4 November 2021 21:36
To: Yury Kirsanov <y.kirsanov@xxxxxxxxx>
Cc: Zach Heise <heise@xxxxxxxxxxxx>; Scharfenberg, Carsten
<c.scharfenberg@xxxxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
Subject: Re: Re: fresh pacific installation does not
detect available disks
Hello,
I agree with that point. When Ceph creates LVM volumes it adds LVM
tags to them; that is how Ceph recognizes that they are occupied by
Ceph. So you should remove the LVM volumes and, even better, clean
all data on them. Usually it’s enough to clean just the head of the
LVM partition, where the volume metadata is stored.
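To see whether such leftover LVM metadata is actually present, the tags can be listed directly. A minimal sketch; the `ceph` grep pattern is an assumption about how ceph-volume prefixes its tags:

```shell
# List LVM logical volumes with their tags; ceph-volume marks its
# volumes with ceph.* tags (an assumption about the tag prefix).
list_ceph_lvs() {
    lvs --noheadings -o lv_name,vg_name,lv_tags 2>/dev/null | grep -i ceph
}
if command -v lvs >/dev/null 2>&1; then
    list_ceph_lvs || echo "no ceph-tagged LVs found"
fi
```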
---
Sergey Protsun
On Thu, 4 Nov 2021 at 22:29, Yury Kirsanov
<y.kirsanov@xxxxxxxxx> wrote:
Hi,
You should erase any partitions or LVM groups on the disks and restart the
OSD hosts so Ceph will be able to detect the drives. I usually just do 'dd
if=/dev/zero of=/dev/<sd*> bs=1M count=1024' and then reboot the host to
make sure it is definitely clean. Alternatively, you can zap the drives,
remove LVM groups using pvremove, or remove partitions using fdisk.
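The wipe-and-reboot sequence above can be sketched as a guarded script. DRY_RUN=1 (the default here) only prints the commands, since they destroy everything on the target disk; the wipefs step is an addition beyond what was described:

```shell
# Hedged sketch of the cleanup described above. DRY_RUN=1 (default)
# only echoes the commands; set DRY_RUN=0 to really wipe $DISK.
DISK="${DISK:-/dev/sdc}"
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }
run dd if=/dev/zero of="$DISK" bs=1M count=1024   # zero the first GiB
run wipefs --all "$DISK"                          # clear remaining signatures
run pvremove "$DISK"                              # drop any LVM PV label
run reboot                                        # reboot so the kernel rereads
```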
Regards,
Yury.
On Fri, 5 Nov 2021, 07:24 Zach Heise
<heise@xxxxxxxxxxxx> wrote:
Hi Carsten,
When I had problems on my physical hosts (recycled systems that we wanted
to just use in a test cluster) I found that I needed to use sgdisk
--zap-all /dev/sd{letter} to clean all partition maps off the disks before
ceph would recognize them as available. Worth a shot in your case, even
though as fresh virtual volumes they shouldn't have anything on them (yet)
anyway.
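A guarded version of that command; note that on Debian, sgdisk ships in the gdisk package rather than as a package of its own. The dry-run default is there because --zap-all destroys the whole partition table:

```shell
# sgdisk comes from Debian's gdisk package (apt install gdisk).
# --zap-all destroys all GPT/MBR structures, so default to a dry run.
DISK="${DISK:-/dev/sdc}"
DRY_RUN="${DRY_RUN:-1}"
zap_all() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ sgdisk --zap-all $DISK"   # dry run: just show the command
    else
        sgdisk --zap-all "$DISK"          # really wipe the partition table
    fi
}
zap_all
```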
-----Original Message-----
From: Scharfenberg, Carsten <c.scharfenberg@xxxxxxxxxxxxx>
Sent: Thursday, November 4, 2021 12:59 PM
To: ceph-users@xxxxxxx
Subject: fresh pacific installation does not detect available
disks
Hello everybody,
as a ceph newbie I've tried setting up ceph pacific according to the
official documentation: https://docs.ceph.com/en/latest/cephadm/install/
The intention was to set up a single-node "cluster" with radosgw to provide
local S3 storage.
This failed because my ceph "cluster" would not detect any OSDs.
I started from a Debian 11.1 (bullseye) VM hosted on VMware Workstation. Of
course I've added some additional disk images to be used as OSDs.
These are the steps I've performed:
curl --silent --remote-name --location
https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
chmod +x cephadm
./cephadm add-repo --release pacific
./cephadm install
apt install -y cephadm
cephadm bootstrap --mon-ip <my_ip>
cephadm add-repo --release pacific
cephadm install ceph-common
ceph orch apply osd --all-available-devices
The last command appears to have no effect. Its sole output is:
Scheduled osd.all-available-devices update...
Also ceph -s shows that no OSDs were added:
cluster:
  id:     655a7a32-3bbf-11ec-920e-000c29da2e6a
  health: HEALTH_WARN
          OSD count 0 < osd_pool_default_size 1

services:
  mon: 1 daemons, quorum terraformdemo (age 2d)
  mgr: terraformdemo.aylzbb(active, since 2d)
  osd: 0 osds: 0 up, 0 in (since 2d)

data:
  pools:   0 pools, 0 pgs
  objects: 0 objects, 0 B
  usage:   0 B used, 0 B / 0 B avail
  pgs:
To find out what might be going wrong, I've also tried this:
cephadm install ceph-osd
ceph-volume inventory
This results in a list that makes more sense:
Device Path  Size      rotates  available  Model name
/dev/sdc     20.00 GB  True     True       VMware Virtual S
/dev/sde     20.00 GB  True     True       VMware Virtual S
/dev/sda     20.00 GB  True     False      VMware Virtual S
/dev/sdb     20.00 GB  True     False      VMware Virtual S
/dev/sdd     20.00 GB  True     False      VMware Virtual S
So how can I convince cephadm to use the available devices?
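For debugging questions like this, a few read-only commands usually show why cephadm rejects or ignores devices. A hedged sketch; which of these reveals the root cause here is an assumption:

```shell
# Read-only diagnostics for a cephadm cluster that ignores devices.
inspect_orch() {
    ceph orch device ls --wide   # wide output includes reject reasons
    ceph log last cephadm        # recent cephadm/orchestrator log messages
    ceph health detail           # expanded health warnings
}
if command -v ceph >/dev/null 2>&1; then
    inspect_orch
fi
```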
Regards,
Carsten
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx