For anyone finding this thread down the road: I wrote to the poster yesterday with the same observation. Browsing the ceph-ansible docs and code, it appears that to get the deployment they want, one may pre-create LVs and enumerate them as explicit data devices. Their configuration also enables primary affinity, so I suspect they're trying the [brittle] trick of mixing HDD and SSD OSDs in the same pool, with the SSDs forced primary for reads.

> Hi,
>
> it appears that the SSDs were used as db devices (/dev/sd[efgh]). According to [1] (I don't use ansible) the simple case is that:
>
>> [...] most of the decisions on how devices are configured to provision an OSD are made by the Ceph tooling (ceph-volume lvm batch in this case).
>
> I assume that this is exactly what happened: ceph-volume batch deployed the SSDs as RocksDB db devices. I'm not sure how to prevent ansible from doing that, but there are probably several threads out there that explain it.
>
> Regards,
> Eugen
>
> [1] https://docs.ceph.com/projects/ceph-ansible/en/latest/osds/scenarios.html

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
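
P.S. for archive readers: a minimal sketch of the pre-created-LV approach mentioned above. The VG/LV names, device paths, sizes, and the host_vars filename here are hypothetical examples; the `lvm_volumes` variable is the ceph-ansible mechanism for enumerating explicit volumes instead of letting `ceph-volume lvm batch` decide (see the scenarios doc at [1]). Adapt before running -- these commands modify disks.

```shell
# Pre-create LVs: HDD for data, SSD for RocksDB db (device names are examples).
pvcreate /dev/sda /dev/sde
vgcreate ceph-data /dev/sda
vgcreate ceph-db   /dev/sde
lvcreate -l 100%FREE -n data-0 ceph-data
lvcreate -L 60G      -n db-0   ceph-db

# Enumerate them explicitly in host_vars so ceph-ansible passes these exact
# volumes to ceph-volume rather than batch-provisioning the SSDs as db devices.
cat > host_vars/osd-node-1.yml <<'EOF'
lvm_volumes:
  - data: data-0
    data_vg: ceph-data
    db: db-0
    db_vg: ceph-db
EOF

# If they really want the "SSDs forced primary for reads" trick, that would
# then be something like (per-OSD, 0.0-1.0):
#   ceph osd primary-affinity osd.4 1.0   # SSD OSD
#   ceph osd primary-affinity osd.0 0.0   # HDD OSD
```

As noted above, this HDD/SSD-mixed-pool arrangement is brittle; splitting the device classes into separate pools is the more common approach.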