Re: Set existing pools to use hdd device class only

All of the data moves because all of the crush IDs for the hosts and OSDs change when you configure a crush rule to only use SSDs or HDDs. Crush creates shadow hosts and shadow OSDs in the crush map that contain only each class of OSD. So if you had node1 with osd.0 as an HDD and osd.1 as an SSD, then your crush rules using classes would use the shadow crush items of node1~hdd, with a different ID for osd.0, and the same for the SSDs. This replaced the old behavior where you had to actually create a dummy host in a second crush root yourself and place all of the OSDs in their respective roots.
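For reference, those shadow items can be listed directly; the output below is only an illustrative sketch of what a host with one HDD and one SSD might look like, not real cluster output:

ceph osd crush tree --show-shadow

ID  CLASS WEIGHT  TYPE NAME
-7    ssd 1.00000 root default~ssd
-8    ssd 1.00000     host node1~ssd
 1    ssd 1.00000         osd.1
-5    hdd 1.00000 root default~hdd
-6    hdd 1.00000     host node1~hdd
 0    hdd 1.00000         osd.0
-1        2.00000 root default
-2        2.00000     host node1
 0    hdd 1.00000         osd.0
 1    ssd 1.00000         osd.1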

There is a thread from the ceph-large users ML that covers a way to make this change without shifting data on an HDD-only cluster. Hopefully it will be helpful for you.


http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
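The approach there builds on the usual offline crush map edit cycle; a rough sketch with placeholder file names, assuming you test the edited map before injecting it (the actual edits you need are described in the linked post):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt as described in the linked thread
crushtool -c crushmap.txt -o crushmap-new.bin
crushtool -i crushmap-new.bin --test --show-statistics
ceph osd setcrushmap -i crushmap-new.bin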

On Mon, Aug 20, 2018, 5:42 AM Enrico Kern <enrico.kern@xxxxxxxxxx> wrote:
Hmm, then that is not really an option for me. Maybe someone from the devs can shed some light on why it does a migration as long as you only have OSDs of the same class? I have a few petabytes of storage in each cluster. When it starts migrating everything again, that will result in a huge performance bottleneck. My plan was to set the existing pools to use the new hdd-only crush rule and add SSD OSDs later.
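If a rebalance does end up running, I assume the impact could at least be softened by throttling backfill while it runs, along these lines (values are examples only):

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'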

On Mon, Aug 20, 2018 at 11:22 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
 
I just recently did the same. Take into account that everything starts
migrating. However weird it may be, I had an HDD-only test cluster and
changed the crush rule to hdd, and it still took a few days, totally
unnecessarily as far as I am concerned.




-----Original Message-----
From: Enrico Kern [mailto:enrico.kern@xxxxxxxxxx]
Sent: Monday, 20 August 2018 11:18
To: ceph-users@xxxxxxxxxxxxxx
Subject: Set existing pools to use hdd device class only

Hello,

right now we have multiple HDD-only clusters with either filestore
journals on SSDs or, on newer installations, WAL etc. on SSD.

I plan to extend our ceph clusters with SSDs to provide SSD-only pools.
In Luminous we have device classes, so I should be able to do this
without editing the crush map by hand.

In the device class docs it says I can create "new" pools that use only
SSDs, for example:

ceph osd crush rule create-replicated fast default host ssd
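For a brand-new pool, I assume the rule can simply be given at creation time (pool name and PG counts here are just placeholders):

ceph osd pool create fast-pool 64 64 replicated fast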

What happens if I apply this to an existing pool, but with the hdd device
class? I wasn't able to test this yet in our staging cluster and wanted to
ask what the right way to do this is.

I want to set an existing pool called volumes to only use OSDs with the hdd
class. Right now all OSDs are HDDs, so in theory it should not use
newly created SSD OSDs once I have set everything up with the hdd class, right?

So for the existing pool, running:

ceph osd crush rule create-replicated volume-hdd-only default host hdd
ceph osd pool set volumes crush_rule volume-hdd-only


should be the way to go, right?
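Once set, I assume the change can be verified with these read-only commands:

ceph osd pool get volumes crush_rule
ceph osd crush rule dump volume-hdd-only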


Regards,

Enrico

--


Enrico Kern

VP IT Operations


enrico.kern@xxxxxxxxxx
+49 (0) 30 555713017 / +49 (0) 152 26814501

skype: flyersa
LinkedIn Profile <https://www.linkedin.com/in/enricokern>


<https://www.glispa.com/>


Glispa GmbH | Berlin Office

Stromstr. 11-17, Berlin, Germany, 10551

Managing Director Din Karol-Gavish
Registered in Berlin
AG Charlottenburg | HRB 114678B
–––––––––––––––––––––––––––––




--

Enrico Kern
VP IT Operations

+49 (0) 30 555713017 / +49 (0) 152 26814501
skype: flyersa
LinkedIn Profile




Glispa GmbH | Berlin Office

Managing Director Din Karol-Gavish
Registered in Berlin
AG Charlottenburg | HRB 114678B
–––––––––––––––––––––––––––––
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
