Re: [EXTERNAL] [Pacific] ceph orch device ls does not return any HDD

I don’t quite understand why that zap would not work.  But here’s where I’d start.


  1.  cephadm check-host
     *   Run this on each of your hosts to make sure cephadm, podman and all other prerequisites are installed and recognized
  2.  ceph orch ls
     *   This should show at least a mon, mgr, and osd spec deployed
  3.  ceph orch ls osd --export
     *   This will show the OSD service specifications that the orchestrator uses to identify which devices to deploy as OSDs (see the sample spec just after this list)
  4.  ceph orch host ls
     *   This will list the hosts that have been added to the orchestrator’s inventory, along with the labels applied to them, which correlate to the service placement labels
  5.  ceph log last cephadm
     *   This will show you what orchestrator has been trying to do, and how it may be failing
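
For reference, the export from step 3 is YAML. A minimal sketch of an all-available-devices spec looks something like the following; the service_id and placement here are just illustrative, not necessarily what your cluster has:

    service_type: osd
    service_id: all-available-devices
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true

If no osd spec comes back at all, the orchestrator has nothing telling it which devices to turn into OSDs.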

Also, it never hurts to have a look at “ceph -s” and “ceph health detail”, particularly for anyone trying to help you without access to your systems.
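
If you do post those to the list, something along these lines (the file name is just a suggestion) collects the usual first-round information in one place:

    # capture status, health, and device inventory for the list
    { ceph -s; ceph health detail; ceph orch device ls; } > ceph-report.txt 2>&1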

Best of luck,
Josh Beaman

From: Patrick Begou <Patrick.Begou@xxxxxxxxxxxxxxxxxxxxxx>
Date: Friday, May 12, 2023 at 10:45 AM
To: ceph-users <ceph-users@xxxxxxx>
Subject: [EXTERNAL] [Pacific] ceph orch device ls does not return any HDD
Hi everyone

I'm new to Ceph; a four-day training session in France with Octopus on
VMs convinced me to build my first cluster.

At this time I have 4 identical old nodes for testing, each with 3 HDDs
and 2 network interfaces, running AlmaLinux 8 (el8). I tried to replay
the training session but it failed, breaking the web interface because
podman 4.2 is not compatible with Octopus.

So I tried to deploy Pacific with the cephadm tool on my first node
(mostha1), which will also let me test an upgrade later.

    dnf -y install \
      https://download.ceph.com/rpm-16.2.13/el8/noarch/cephadm-16.2.13-0.el8.noarch.rpm

    monip=$(getent ahostsv4 mostha1 |head -n 1| awk '{ print $1 }')
    cephadm bootstrap --mon-ip $monip --initial-dashboard-password xxxxx \
                       --initial-dashboard-user admceph \
                       --allow-fqdn-hostname --cluster-network 10.1.0.0/16

This was successful.

But running "ceph orch device ls" does not show any HDD, even though I
have /dev/sda (used by the OS), /dev/sdb and /dev/sdc.

The web interface shows a raw capacity which is the aggregate of the
sizes of the node's 3 HDDs.

I've also tried to reset /dev/sdb, but cephadm does not see it:

    [ceph: root@mostha1 /]# ceph orch device zap mostha1.legi.grenoble-inp.fr /dev/sdb --force
    Error EINVAL: Device path '/dev/sdb' not found on host 'mostha1.legi.grenoble-inp.fr'

On my first attempt with Octopus, I was able to list the available HDDs
with this command. Before moving to Pacific, the OS on this node was
reinstalled from scratch.

Any advice for a Ceph beginner?

Thanks

Patrick
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



