It may help if you could share how you added those OSDs. This guide works for me:
https://docs.ceph.com/en/latest/cephadm/drivegroups/

Tony

________________________________________
From: Philip Brown <pbrown@xxxxxxxxxx>
Sent: February 17, 2021 09:30 PM
To: ceph-users
Subject: ceph orch and mixed SSD/rotating disks

I'm coming back to trying mixed SSD + spinning disks after maybe a year.

My vague recollection was that if you told Ceph "go auto-configure all the disks", it would automatically carve the SSDs into the appropriate number of LVM segments and use them as WAL devices for each HDD-based OSD on the system.

Was I wrong? When I tried to bring up a brand-new cluster (Octopus, cephadm-bootstrapped) with multiple nodes and multiple disks per node, it seemed to bring up the SSDs as just another set of OSDs.

It clearly recognized them as SSDs: the output of "ceph orch device ls" showed them as ssd, vs. hdd for the others. It just... didn't use them the way I expected.

Maybe I was thinking of ceph-ansible. Is there a nice way to do this with the new cephadm-based "ceph orch"? I would rather not have to write JSON files or whatever by hand, when a computer should be perfectly capable of auto-generating this stuff itself.

--
Philip Brown | Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310 | Fax 714.918.1325
pbrown@xxxxxxxxxx | www.medata.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
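
For illustration, a minimal drive-group spec of the kind the linked guide describes might look something like the sketch below. This is only an assumption of what such a spec could look like (Octopus-era syntax; the service_id and host_pattern are placeholders), using rotational filters so the spinning disks become data OSDs and the SSDs are shared for their DB/WAL:

    # osd_spec.yml -- hypothetical example; adjust placement and filters to your hosts
    service_type: osd
    service_id: hdd_with_ssd_db        # placeholder name
    placement:
      host_pattern: '*'                # apply on all hosts; narrow as needed
    data_devices:
      rotational: 1                    # HDDs hold the OSD data
    db_devices:
      rotational: 0                    # SSDs are carved into LVs for the DB (and WAL)

It would then be previewed and applied with something along the lines of:

    ceph orch apply osd -i osd_spec.yml --dry-run
    ceph orch apply osd -i osd_spec.yml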