Re: New pool with SSD OSDs


 



Exactly! I created a replicated-hdd rule and assigned it to an existing small pool without making any changes to the OSDs (all HDD), and the PGs started migrating... It seems like new rules force migrations...
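For reference, a rough sketch of what I ran (the pool name below is just a placeholder, the rest matches my cluster):

     # ceph osd crush rule create-replicated replicated-hdd default host hdd
     # ceph osd pool set mypool crush_rule replicated-hdd
     # ceph -s

The last command just shows the backfill that starts as soon as the rule is assigned, even though every OSD is already an HDD.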

On 14/9/20 at 11:09, André Gemünd wrote:
The same thing happened to us two weeks ago on Nautilus, although we added the rules and device classes.

----- On 14 Sep 2020 at 16:02, Marc Roos M.Roos@xxxxxxxxxxxxxxxxx wrote:

I did the same 1 or 2 years ago, creating a replicated_ruleset_hdd and a
replicated_ruleset_ssd. Even though I did not have any SSDs on any of
the nodes at that time, adding the hdd device-class criterion made PGs migrate.
I thought it was strange that this happens on an HDD-only cluster, so I
mentioned it here. I am not sure whether this is still an issue, but it is
better to take it into account.
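For anyone who wants to check this on their own cluster: class-based rules
use the shadow trees that device classes create (names like default~hdd),
which is a likely reason the mappings shift even on an hdd-only cluster.
You can inspect them with something like:

     # ceph osd crush rule ls
     # ceph osd crush rule dump replicated_ruleset_hdd
     # ceph osd crush tree --show-shadow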





-----Original Message-----
To: ceph-users@xxxxxxx
Subject: New pool with SSD OSDs

Hello!

We have a Ceph cluster with 30 4 TB HDDs across 6 hosts, used only for RBD.

Now we're receiving another 6 servers with 6 2 TB SSDs each, and we want to
create a separate pool for RBD on SSD, while unused and backup volumes
stay on HDD.


I have some questions:


I am currently only using "replicated_rule". If I add an SSD OSD to the
cluster, will Ceph start migrating PGs to it?

If so, to prevent this, do I first have to create a rule like

     # ceph osd crush rule create-replicated pool-hdd default host hdd

and then

     # ceph osd pool set rbd crush_rule pool-hdd

?
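(And I guess afterwards I could confirm which rule the pool is using with:

     # ceph osd pool get rbd crush_rule

just to be sure the change was applied.)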


Or, if Ceph does not automatically mix HDD and SSD, do I just create the SSD OSDs
and then

     # ceph osd crush rule create-replicated pool-ssd default host ssd

     # ceph osd pool create pool-ssd 256 256 replicated pool-ssd

?

And then migrate images from one pool to the other as needed.
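For that last step I was thinking of something like RBD live migration
(available since Nautilus), with an example image name:

     # rbd migration prepare rbd/image1 pool-ssd/image1
     # rbd migration execute pool-ssd/image1
     # rbd migration commit pool-ssd/image1

or simply exporting and importing the images if that is not available.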


Any thoughts are welcome!

Thanks in advance for your time.


Javier.-

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




