Indeed, you can set the device class to pretty much arbitrary strings and
reference them in your CRUSH rules. By default 'hdd', 'ssd', and 'nvme' are
autodetected - though my Optanes showed up as 'ssd'. A rough sketch of the
commands involved is at the bottom of this mail.

On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
> You can classify OSDs, e.g. as ssd, and you can assign this class to a
> pool you create. This way you can have RBDs running on only SSDs. I
> think there is also a class for nvme, and you can create custom classes.
>
>
> -----Original Message-----
> From: Philip Brown [mailto:pbrown@xxxxxxxxxx]
> Sent: 16 December 2019 22:55
> To: ceph-users
> Subject: Separate disk sets for high IO?
>
> Still relatively new to ceph, but have been tinkering for a few weeks
> now.
>
> If I'm reading the various docs correctly, then any RBD in a particular
> ceph cluster will be distributed across ALL OSDs, ALL the time.
> There is no way to designate a particular set of disks, AKA OSDs, to be
> a high-performance group and allocate certain RBDs to only use that set
> of disks.
> Pools only control things like the replication count and the number of
> placement groups.
>
> I'd have to set up a whole new ceph cluster for the type of behavior I
> want.
>
> Am I correct?
>
>
> --
> Philip Brown | Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310 | Fax 714.918.1325
> pbrown@xxxxxxxxxx | www.medata.com
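
For reference, a minimal sketch of the workflow, assuming a CRUSH root of
'default', a 'host' failure domain, and made-up names (osd.10/osd.11,
'fast_rule', 'rbd-fast'; the pg count is also only an example) - adjust
everything to your own cluster:

  # Tag the fast OSDs with a device class (autodetection usually does this
  # already; an existing class must be cleared before setting a new one).
  ceph osd crush rm-device-class osd.10 osd.11
  ceph osd crush set-device-class nvme osd.10 osd.11

  # Replicated CRUSH rule that only selects OSDs of class 'nvme'.
  ceph osd crush rule create-replicated fast_rule default host nvme

  # Pool bound to that rule, enabled for RBD, plus a test image.
  ceph osd pool create rbd-fast 64 64 replicated fast_rule
  ceph osd pool application enable rbd-fast rbd
  rbd create rbd-fast/testimage --size 100G

Images in 'rbd-fast' then only ever land on the nvme-classed OSDs, while
other pools keep using the rest of the cluster, so no separate cluster is
needed.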