Re: Separate disk sets for high IO?

Philip;

Ah, ok.  I suspect that isn't documented because the developers don't want average users doing it.

It's also possible that it won't work as expected; there is discussion on the web of device classes being changed when the OSD daemon starts.

That said...

"ceph osd crush class create <name>" is the command to create a custom device class, at least in Nautilus 14.2.4.

Theoretically, a custom device class can then be used the same way as the built-in device classes.
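
For example, something along these lines should work (the class name, OSD IDs, rule name, and pool name below are placeholders, and I haven't tested this myself):

ceph osd crush class create highio
# an OSD can only carry one device class, so remove the autodetected one first
ceph osd crush rm-device-class osd.3 osd.4
ceph osd crush set-device-class highio osd.3 osd.4
# replicated rule limited to the new class, then point a pool at it
ceph osd crush rule create-replicated highio_rule default host highio
ceph osd pool set rbd_highio crush_rule highio_rule

If the classes do get changed when the OSD daemons restart, I believe osd_class_update_on_start is the setting involved, but I haven't verified that.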

Caveat: I'm a user, not a developer of Ceph.

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Philip Brown
Sent: Monday, December 16, 2019 4:42 PM
To: ceph-users
Subject: Re:  Separate disk sets for high IO?

Yes I saw that thanks.

Unfortunately, that doesn't show the use of "custom classes" that someone hinted at.



----- Original Message -----
From: DHilsbos@xxxxxxxxxxxxxx
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Cc: "Philip Brown" <pbrown@xxxxxxxxxx>
Sent: Monday, December 16, 2019 3:38:49 PM
Subject: RE: Separate disk sets for high IO?

Philip;

There isn't any documentation that shows specifically how to do that, though the below comes close.

Here's the documentation, for Nautilus, on CRUSH operations:
https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/

About a third of the way down the page is a discussion of "Device Classes."  In that section it talks about creating CRUSH rules that target certain device classes (hdd, ssd, and nvme, by default).

Once you have a rule, you can configure a pool to use the rule.
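
For example (the rule and pool names here are placeholders):

# replicated rule that only selects OSDs with the 'ssd' device class,
# using 'default' as the CRUSH root and 'host' as the failure domain
ceph osd crush rule create-replicated ssd_rule default host ssd
# assign an existing pool to that rule
ceph osd pool set mypool crush_rule ssd_rule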

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Philip Brown
Sent: Monday, December 16, 2019 3:43 PM
To: Nathan Fish
Cc: ceph-users
Subject: Re:  Separate disk sets for high IO?

Sounds very useful.

Is there any online example documentation for this?
I haven't found any so far.


----- Original Message -----
From: "Nathan Fish" <lordcirth@xxxxxxxxx>
To: "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "Philip Brown" <pbrown@xxxxxxxxxx>
Sent: Monday, December 16, 2019 2:07:44 PM
Subject: Re:  Separate disk sets for high IO?

Indeed, you can set device classes to pretty much arbitrary strings and
reference them in CRUSH rules. By default, 'hdd', 'ssd', and I think 'nvme'
are autodetected - though my Optanes showed up as 'ssd'.
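
If I remember right, re-classifying is just (osd.12 and 'optane' being arbitrary examples):

# drop the autodetected class, then assign the one you want
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class optane osd.12
# list the classes the cluster now knows about
ceph osd crush class ls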

On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
>
> You can classify OSDs, e.g. as ssd, and you can assign this class to a
> pool you create. This way you can have RBDs running on only SSDs. I
> think there is also a class for nvme, and you can create custom classes.
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


