We use custom device classes to split data NVMe drives from metadata NVMe drives. If a device already has a class set, it does not get overwritten at startup.
Once you set the class, it works just like it says on the tin: put this pool on these classes, that other pool on another class, etc.
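For anyone who wants to try the same split, a rough sketch of the commands involved (the class, rule, pool, and OSD names here are illustrative, not our actual ones):

  # An autodetected class must be removed before a new one can be set
  ceph osd crush rm-device-class osd.0 osd.1
  ceph osd crush set-device-class nvme-data osd.0
  ceph osd crush set-device-class nvme-meta osd.1

  # One CRUSH rule per class, then point each pool at its rule
  ceph osd crush rule create-replicated data-rule default host nvme-data
  ceph osd crush rule create-replicated meta-rule default host nvme-meta
  ceph osd pool set datapool crush_rule data-rule
  ceph osd pool set metapool crush_rule meta-rule

If memory serves, the automatic detection at startup is governed by the 'osd_class_update_on_start' option, and it will not replace a class that has already been set explicitly.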
--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Administration
Rochester Institute of Technology
o:(585) 475-3245 | pfmeec@xxxxxxx
Sent from my phone. Please excuse any brevity or typos.
------------------------
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of DHilsbos@xxxxxxxxxxxxxx <DHilsbos@xxxxxxxxxxxxxx>
Sent: Monday, December 16, 2019 6:51:46 PM
To: ceph-users@xxxxxxxxxxxxxx <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Separate disk sets for high IO?
Philip;
Ah, ok. I suspect that isn't documented because the developers don't want average users doing it.
It's also possible that it won't work as expected, as there is discussion on the web of device classes being changed at startup of the OSD daemon.
That said...
"ceph osd crush class create <name>" is the command to create a custom device class, at least in Nautilus 14.2.4.
Theoretically, a custom device class can then be used the same as the built-in device classes.
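A minimal sketch of that, assuming Nautilus as above (the class name 'archive' is made up for illustration):

  ceph osd crush class create archive   # register the custom class
  ceph osd crush class ls               # it should now list alongside hdd/ssd/nvme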
Caveat: I'm a user, not a developer of Ceph.
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Philip Brown
Sent: Monday, December 16, 2019 4:42 PM
To: ceph-users
Subject: Re: Separate disk sets for high IO?
Yes, I saw that, thanks.
Unfortunately, that doesn't show use of "custom classes" as someone hinted at.
----- Original Message -----
From: DHilsbos@xxxxxxxxxxxxxx
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Cc: "Philip Brown" <pbrown@xxxxxxxxxx>
Sent: Monday, December 16, 2019 3:38:49 PM
Subject: RE: Separate disk sets for high IO?
Philip;
There isn't any documentation that shows specifically how to do that, though the below comes close.
Here's the documentation, for Nautilus, on CRUSH operations:
https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/
About a third of the way down the page is a discussion of "Device Classes." In that section, it talks about creating CRUSH rules that target certain device classes (hdd, ssd, and nvme by default).
Once you have a rule, you can configure a pool to use the rule.
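For reference, the flow described there looks roughly like this; the rule and pool names are placeholders:

  # Replicated rule that keeps data on ssd-class OSDs, spread across hosts
  ceph osd crush rule create-replicated fast-rule default host ssd
  ceph osd pool set mypool crush_rule fast-rule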
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Philip Brown
Sent: Monday, December 16, 2019 3:43 PM
To: Nathan Fish
Cc: ceph-users
Subject: Re: Separate disk sets for high IO?
Sounds very useful.
Any online example documentation for this?
I haven't found any so far.
----- Original Message -----
From: "Nathan Fish" <lordcirth@xxxxxxxxx>
To: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "Philip Brown" <pbrown@xxxxxxxxxx>
Sent: Monday, December 16, 2019 2:07:44 PM
Subject: Re: Separate disk sets for high IO?
Indeed, you can set device classes to pretty much arbitrary strings and
specify them in CRUSH rules. By default, 'hdd', 'ssd', and I think 'nvme'
are autodetected - though my Optanes showed up as 'ssd'.
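For example, to reclassify an Optane OSD that was autodetected as 'ssd' (osd.7 is a placeholder):

  ceph osd crush rm-device-class osd.7          # clear the autodetected class
  ceph osd crush set-device-class nvme osd.7    # assign the one you want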
On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
> You can classify OSDs, e.g. as ssd, and you can assign this class to a
> pool you create. This way you can have RBDs running on only SSDs. I
> think there is also a class for nvme, and you can create custom classes.
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com