Re: cephadm bootstraps cluster with bad CRUSH map(?)

> On May 20, 2024, at 12:21 PM, Matthew Vernon <mvernon@xxxxxxxxxxxxx> wrote:
> 
> Hi,
> 
> I'm probably Doing It Wrong here, but: my hosts are in racks, and I wanted Ceph to use that information from the get-go, so I tried to achieve this during bootstrap.
> 
> This has left me with a single sad pg:
> [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
>    pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
> 

Perhaps that's the .mgr pool.
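
To confirm: the pool id is the number before the dot in the pg id, so pg 1.0 lives in pool 1. Something like this should tell you:

    # list pools with their ids; pool 1 is usually '.mgr' on a fresh cluster
    ceph osd pool ls detail
    # show where the mons map that pg (works even while it is inactive)
    ceph pg map 1.0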

> ceph osd tree shows that CRUSH picked up my racks OK, e.g.:
> -3          45.11993  rack B4
> -2          45.11993      host moss-be1001
> 1    hdd    3.75999          osd.1             up   1.00000  1.00000


Please send the first 10 or so lines of `ceph osd tree`.
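
Something like this is enough:

    ceph osd tree | head -n 10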

> 
> I passed this config to bootstrap with --config:
> 
> [global]
>  osd_crush_chooseleaf_type = 3

Why did you set that?  3 is an unusual value.  AIUI, most of the time the only reason to change this option is when setting up a single-node sandbox, and perhaps localpools create a rule using it.  I suspect this is at least part of your problem.
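
If you want to see what the bootstrap-generated rule actually does, dumping it shows the failure-domain type it chooses. The rule and pool names below are the usual defaults, and "rack-rule" is just an example name; adjust to taste:

    # inspect the default replicated rule; look at the 'type' in its chooseleaf step
    ceph osd crush rule dump replicated_rule
    # if you do want an explicit rack failure domain, create a rule for it
    # and point the .mgr pool (and any others) at it
    ceph osd crush rule create-replicated rack-rule default rack
    ceph osd pool set .mgr crush_rule rack-rule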

> 
> 
> Once the cluster was up I used an osd spec file that looked like:
> service_type: osd
> service_id: rrd_single_NVMe
> placement:
>  label: "NVMe"
> spec:
>  data_devices:
>    rotational: 1
>  db_devices:
>    model: "NVMe"

Is it your intent to use spinners for payload data and NVMe SSDs for the DB/WAL metadata?
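
As an aside, you can preview what a drivegroup spec will do before applying it; assuming the spec above is saved as osd-spec.yml:

    # show the disks cephadm has inventoried on each host
    ceph orch device ls
    # report which devices the spec would consume, without deploying anything
    ceph orch apply -i osd-spec.yml --dry-run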

> 
> I could presumably fix this up by editing the crushmap (to put the racks into the default bucket), but what did I do wrong? Was this not a reasonable thing to want to do with cephadm?
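
If the rack buckets did end up outside the default root, moving them back under it should be enough; for example, using the bucket name from your tree snippet:

    ceph osd crush move B4 root=default
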
> 
> I'm running
> ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)
> 
> Thanks,
> 
> Matthew
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



