Yes, I saw that, thanks. Unfortunately, that doesn't show use of "custom classes", as someone hinted at.

----- Original Message -----
From: DHilsbos@xxxxxxxxxxxxxx
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Cc: "Philip Brown" <pbrown@xxxxxxxxxx>
Sent: Monday, December 16, 2019 3:38:49 PM
Subject: RE: Separate disk sets for high IO?

Philip;

There isn't any documentation that shows specifically how to do that, though the below comes close.

Here's the documentation, for Nautilus, on CRUSH operations:
https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/

About a third of the way down the page is a discussion of "Device Classes." In that section it talks about creating CRUSH rules that target certain device classes (hdd, ssd, and nvme, by default). Once you have a rule, you can configure a pool to use the rule.

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Philip Brown
Sent: Monday, December 16, 2019 3:43 PM
To: Nathan Fish
Cc: ceph-users
Subject: Re: Separate disk sets for high IO?

Sounds very useful. Any online example documentation for this? I haven't found any so far.

----- Original Message -----
From: "Nathan Fish" <lordcirth@xxxxxxxxx>
To: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "Philip Brown" <pbrown@xxxxxxxxxx>
Sent: Monday, December 16, 2019 2:07:44 PM
Subject: Re: Separate disk sets for high IO?

Indeed, you can set the device class to pretty much arbitrary strings and specify them. By default, 'hdd', 'ssd', and I think 'nvme' are autodetected - though my Optanes showed up as 'ssd'.

On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
> You can classify OSDs, e.g. as ssd, and you can assign this class to a
> pool you create. This way you can have RBDs running on only SSDs. I
> think there is also a class for nvme, and you can create custom classes.
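
For the archives, here is a minimal sketch of the custom-class workflow described above, using the Nautilus CLI. The class name "fast", the OSD IDs, the rule name, and the pool name "highio" are made-up placeholders; adjust the root and failure domain ("default" and "host" below) to match your own CRUSH map:

    # An OSD can only carry one device class, so clear the
    # autodetected class (e.g. 'ssd') before assigning a custom one.
    ceph osd crush rm-device-class osd.10 osd.11

    # Assign an arbitrary custom device class to those OSDs.
    ceph osd crush set-device-class fast osd.10 osd.11

    # Create a replicated CRUSH rule restricted to that class:
    #   create-replicated <rule-name> <root> <failure-domain> <class>
    ceph osd crush rule create-replicated highio-rule default host fast

    # Point an existing pool at the new rule.
    ceph osd pool set highio crush_rule highio-rule

    # Optional: inspect the per-class "shadow" hierarchy CRUSH builds.
    ceph osd crush tree --show-shadow

Note that changing a pool's crush_rule remaps its PGs onto the OSDs the new rule selects, so expect backfill traffic if the pool already holds data.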