Re: Custom CRUSH maps HOWTO?


 



What kind of pools are you using, and do you have different pools for
different purposes? Do you have CephFS pools, RBD-only pools, etc.? Please
describe your setup.
It is generally best practice to create new CRUSH rules and apply them to
pools rather than to modify existing rules, although the latter is possible
as well. Below is one relatively simple approach, but it is just a proposal
and may not fit your needs, so treat it with CAUTION!

If I did the math right you have roughly 81TB of SAS and 61TB of NVMe
capacity. The easiest thing to do, which you can even do from the web GUI,
is to create a new CRUSH rule for a replicated or EC pool (depending on
which one you are currently using), set the failure domain to HOST and the
device class to NVMe, then repeat the process for an HDD/SAS-only rule; the
command-line version is sketched below.
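For reference, this is roughly how it looks on the command line. The rule
and profile names below are just placeholders, and the k/m values are only
an example, so adjust everything to your environment:

    # replicated rules, one per device class, failure domain = host
    ceph osd crush rule create-replicated nvme_rule default host nvme
    ceph osd crush rule create-replicated sas_rule default host hdd

    # for an EC pool the device class goes into the erasure-code profile instead
    ceph osd erasure-code-profile set ec_nvme k=4 m=2 \
        crush-failure-domain=host crush-device-class=nvme

Note that SAS spinners normally show up under the "hdd" device class; check
what classes your OSDs were actually assigned with "ceph osd tree" or
"ceph osd crush class ls".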

After that you can apply the new CRUSH rule to the existing pool. Doing so
will cause a lot of data movement, which may take a short or long time
depending on your network and drive speeds. If the cluster is usually under
heavy load, your clients will definitely notice this action.
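Applying the rule itself is a single command (pool and rule names are
placeholders again):

    ceph osd pool set mypool crush_rule nvme_rule

If the resulting rebalance hurts your clients too much you can slow it down
at the cost of a longer migration, for example by lowering osd_max_backfills
(on Quincy with the mClock scheduler you may also need
osd_mclock_override_recovery_settings=true for that to take effect):

    ceph config set osd osd_max_backfills 1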

That way you would have two sets of disks used for different purposes: one
for fast storage and one for slow storage.

In any case, before doing anything of this sort in production I would test
it at least in a VM environment, if you don't have a test cluster to run it
on first.

However, if what you need is one large chunky pool, there are CRUSH
configurations that tell the cluster to place one or two replicas on the
fast drives and the remaining replicas on the other device class. But don't
take this for granted, I'm not 100% sure it helps: as far as I know Ceph
waits for all replicas to finish writing before acknowledging to the client
that the object is stored, so I'm not sure you would benefit much from a
setup like that.
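For completeness, such a hybrid rule usually looks something like this in
the decompiled CRUSH map. This is only a sketch, the id and names are made
up, so verify it against your own map before compiling and injecting it:

    rule hybrid {
        id 5
        type replicated
        # first replica on an NVMe host
        step take default class nvme
        step chooseleaf firstn 1 type host
        step emit
        # remaining replicas on HDD/SAS hosts
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
    }

You would extract the map with "ceph osd getcrushmap -o map.bin", decompile
it with "crushtool -d", edit, recompile with "crushtool -c" and inject it
back with "ceph osd setcrushmap -i". The first OSD chosen is normally the
primary, so reads would be served from NVMe, but writes would still be
bound by the slowest replica.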




Kind regards,
Nino


On Tue, May 30, 2023 at 4:53 PM Thorne Lawler <thorne@xxxxxxxxxxx> wrote:

> Hi folks!
>
> I have a Ceph production 17.2.6 cluster with 6 machines in it - four
> newer, faster machines with 4x3.84TB NVME drives each, and two with
> 24x1.68TB SAS disks each.
>
> I know I should have done something smart with the CRUSH maps for this
> up front, but until now I have shied away from CRUSH maps as they sound
> really complex.
>
> Right now my cluster's performance, especially write performance, is not
> what it needs to be, and I am looking for advice:
>
> 1. How should I be structuring my crush map, and why?
>
> 2. How does one actually edit and manage a CRUSH map? What /commands/
> does one use? This isn't clear at all in the documentation. Are there
> any GUI tools out there for managing CRUSH?
>
> 3. Is this going to impact production performance or availability while
> I'm configuring it? I have tens of thousands of users relying on this
> thing, so I can't take any risks.
>
> Thanks in advance!
>
> --
>
> Regards,
>
> Thorne Lawler - Senior System Administrator
> *DDNS* | ABN 76 088 607 265
> First registrar certified ISO 27001-2013 Data Security Standard ITGOV40172
> P +61 499 449 170
>
> _DDNS
>
> *Please note:* The information contained in this email message and any
> attached files may be confidential information, and may also be the
> subject of legal professional privilege. If you are not the intended
> recipient any use, disclosure or copying of this email is unauthorised.
> If you received this email in error, please notify Discount Domain Name
> Services Pty Ltd on 03 9815 6868 to report this matter and delete all
> copies of this transmission together with any attachments.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



