Re: fresh pacific installation does not detect available disks

Hi Carsten,

When I had problems on my physical hosts (recycled systems that we wanted to
use in a test cluster), I found that I needed to run sgdisk --zap-all
/dev/sd{letter} to clear all partition maps off the disks before Ceph would
recognize them as available. Worth a shot in your case, even though as fresh
virtual volumes they shouldn't have anything on them (yet) anyway.
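
For example, something along these lines (the device name is a placeholder
for whatever your inventory shows; zapping is destructive, so double-check
the device path first):

        # wipe GPT/MBR partition structures from an example device
        sgdisk --zap-all /dev/sdc

On a cephadm-managed host you should also be able to let the orchestrator do
the wipe, roughly like this (hostname and device path are placeholders):

        # ask the orchestrator to zap the device so it becomes available
        ceph orch device zap <hostname> /dev/sdc --force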

-----Original Message-----
From: Scharfenberg, Carsten <c.scharfenberg@xxxxxxxxxxxxx> 
Sent: Thursday, November 4, 2021 12:59 PM
To: ceph-users@xxxxxxx
Subject:  fresh pacific installation does not detect available disks

Hello everybody,

As a Ceph newbie I've tried setting up Ceph Pacific according to the
official documentation: https://docs.ceph.com/en/latest/cephadm/install/
The intention was to set up a single-node "cluster" with radosgw to provide
local S3 storage.
This failed because my ceph "cluster" would not detect any OSDs.
I started from a Debian 11.1 (bullseye) VM hosted on VMware Workstation. Of
course I've added some additional disk images to be used as OSDs.
These are the steps I've performed:

curl --silent --remote-name --location
https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
chmod +x cephadm
./cephadm add-repo --release pacific
./cephadm install

apt install -y cephadm

cephadm bootstrap --mon-ip <my_ip>

cephadm add-repo --release pacific

cephadm install ceph-common

ceph orch apply osd --all-available-devices


The last command had no visible effect. Its sole output is:

        Scheduled osd.all-available-devices update...
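
That message only confirms that the OSD service spec was scheduled, not that
any disks were claimed. One way to check what the orchestrator actually sees
(on Pacific, --wide should add a reject-reasons column, though the exact
output varies by version):

        # list devices on all hosts as the orchestrator sees them
        ceph orch device ls --wide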



Also ceph -s shows that no OSDs were added:

  cluster:
    id:     655a7a32-3bbf-11ec-920e-000c29da2e6a
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 1

  services:
    mon: 1 daemons, quorum terraformdemo (age 2d)
    mgr: terraformdemo.aylzbb(active, since 2d)
    osd: 0 osds: 0 up, 0 in (since 2d)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:


To find out what might be going wrong, I also tried this:

        cephadm install ceph-osd
        ceph-volume inventory

This results in a list that makes more sense:

Device Path               Size         rotates available Model name
/dev/sdc                  20.00 GB     True    True      VMware Virtual S
/dev/sde                  20.00 GB     True    True      VMware Virtual S
/dev/sda                  20.00 GB     True    False     VMware Virtual S
/dev/sdb                  20.00 GB     True    False     VMware Virtual S
/dev/sdd                  20.00 GB     True    False     VMware Virtual S
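
For a single device, ceph-volume can print a more detailed report that
normally includes the reasons a device was rejected; roughly like this (the
device path is just an example, and on a cephadm host this would typically
run inside the shell container):

        # detailed inventory for one device, including rejected reasons
        cephadm shell -- ceph-volume inventory /dev/sda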


So how can I convince cephadm to use the available devices?

Regards,
Carsten

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
