Re: cephadm bootstraps cluster with bad CRUSH map(?)

Hi,

Thanks for your help!

On 20/05/2024 18:13, Anthony D'Atri wrote:

> You do that with the CRUSH rule, not with osd_crush_chooseleaf_type.  Set that back to the default value of `1`.  This option is marked `dev` for a reason ;)

OK [though that's not obvious from https://docs.ceph.com/en/reef/rados/configuration/pool-pg-config-ref/#confval-osd_crush_chooseleaf_type ]
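
If I'm reading that right, the failure domain then lives in a CRUSH rule rather than in that option, so once the cluster is rebuilt I'd do something roughly like the following (rule and pool names are placeholders, and "host" stands for whatever failure domain I actually want):

  ceph osd crush rule create-replicated rule-by-host default host
  ceph osd pool set <pool> crush_rule rule-by-host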

> but I think you’d also need to revert `osd_crush_chooseleaf_type` too.  Might be better to wipe and redeploy so you know that down the road when you add / replace hardware this behavior doesn’t resurface.

Yep, I'm still at the destroy-and-recreate point here, trying to make sure I can do this repeatably.
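
In case it's useful context, the teardown/rebuild loop I'm running looks roughly like this (fsid and mon IP are placeholders, and the initial config I pass in simply no longer sets osd_crush_chooseleaf_type):

  cephadm rm-cluster --fsid <fsid> --force --zap-osds   # --zap-osds to also wipe the disks, if your cephadm has it
  cephadm bootstrap --mon-ip <mon-ip> --config initial-ceph.conf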

>> Once the cluster was up I used an osd spec file that looked like:
>> service_type: osd
>> service_id: rrd_single_NVMe
>> placement:
>>   label: "NVMe"
>> spec:
>>   data_devices:
>>     rotational: 1
>>   db_devices:
>>     model: "NVMe"
> Is it your intent to use spinners for payload data and SSD for metadata?

Yes.
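
(Before applying the spec for real I've been previewing what it would do with a dry run, e.g.

  ceph orch apply -i osd_spec.yml --dry-run

with osd_spec.yml being the file above.)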

> You might want to set `db_slots` accordingly, by default I think it’ll be 1:1 which probably isn’t what you intend.

Is there an easy way to check this? The docs suggested it would work, and running vgdisplay on the VG that pvs says the NVMe device belongs to shows 24 LVs...
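
The closest thing to a check I've found so far is to look at what ceph-volume actually created, roughly:

  cephadm shell -- ceph-volume lvm list
  lvs -o lv_name,lv_size,vg_name <nvme-vg>

and if I do end up needing to pin the split explicitly, my understanding is the spec can take something like

  db_slots: 24

under spec: (24 just matching the LV count I'm seeing here), though I haven't tried that yet.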

Thanks,

Matthew
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



