cephadm bootstraps cluster with bad CRUSH map(?)

Hi,

I'm probably Doing It Wrong here, but: my hosts are in racks, and I wanted Ceph to use that information from the get-go, so I tried to set this up during bootstrap.

This has left me with a single sad pg:
[WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
    pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
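(I can paste more output if it helps; I assume the relevant bits would be

    ceph pg map 1.0
    ceph osd pool ls detail

since pg 1.0 is presumably the .mgr pool that gets created right after bootstrap.)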

ceph osd tree shows that CRUSH picked up my racks OK, e.g.:
-3          45.11993  rack B4
-2          45.11993      host moss-be1001
 1    hdd    3.75999          osd.1             up   1.00000  1.00000

But the default root seems empty:
-1                 0  root default

and if I decompile the crush map, indeed:
# buckets
root default {
        id -1           # do not change unnecessarily
        id -14 class hdd                # do not change unnecessarily
        # weight 0.00000
        alg straw2
        hash 0  # rjenkins1
}

which does indeed look empty, even though the rack buckets are there and contain the relevant hosts.
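(That came from decompiling the usual way, roughly:

    ceph osd getcrushmap -o /tmp/crushmap.bin
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

the /tmp paths are just whatever scratch files I happened to use.)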

And the replication rule starts from that (empty) default root:
rule replicated_rule {
        id 0
        type replicated
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}

I passed this config to bootstrap with --config (the 3 is meant to be "rack"; see the note further down):

[global]
  osd_crush_chooseleaf_type = 3

and an initial spec file with host entries like this:

service_type: host
hostname: moss-be1001
addr: 10.64.16.40
location:
  rack: B4
labels:
  - _admin
  - NVMe
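(Re the "3" in the bootstrap config above: my understanding is that it indexes into the stock CRUSH type hierarchy,

    type 0 osd
    type 1 host
    type 2 chassis
    type 3 rack
    ...

so 3 should mean "choose leaves across racks". Please correct me if that's the wrong knob.)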

Once the cluster was up, I applied an OSD spec file that looked like:
service_type: osd
service_id: rrd_single_NVMe
placement:
  label: "NVMe"
spec:
  data_devices:
    rotational: 1
  db_devices:
    model: "NVMe"

I could presumably fix this up by editing the crushmap (to put the racks into the default bucket), but what did I do wrong? Was this not a reasonable thing to want to do with cephadm?
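(By "editing the crushmap" I mostly mean something like

    ceph osd crush move B4 root=default

for each rack, rather than a full decompile/recompile; I assume that would graft the rack buckets under the default root.)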

I'm running
ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)

Thanks,

Matthew