Re: Set existing pools to use hdd device class only

The correct URL should be:

http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html


Quoting Jonathan Proulx <jon@xxxxxxxxxxxxx>:

On Mon, Aug 20, 2018 at 06:13:26AM -0400, David Turner wrote:

:There is a thread from the ceph-large users ML that covered a way to make
:this change without shifting data on an HDD-only cluster.  Hopefully it
:will be helpful for you.
:
:
:httpists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
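
For anyone who cannot reach that archive, here is a minimal sketch of one way
to make such a switch without a full rebalance. It is not necessarily the
exact procedure from the linked thread, and it assumes a crushtool new enough
to support --reclassify; the idea is to rewrite the crush map offline so the
hdd shadow buckets keep the original bucket IDs:

ceph osd getcrushmap -o original
# tag everything under "default" as hdd and remap bucket IDs so the
# hdd shadow tree lines up with the existing tree
crushtool -i original --reclassify \
    --set-subtree-class default hdd \
    --reclassify-root default hdd \
    -o adjusted
# check how many mappings would change before committing
crushtool -i original --compare adjusted
ceph osd setcrushmap -i adjusted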

Wish I knew that list existed before I thrashed my entire cloud for a
week doing exactly this. A 95% data move on a cluster that was already
experiencing IO exhaustion was not fun. Luckily it was "only" 0.5PB.

-Jon

:On Mon, Aug 20, 2018, 5:42 AM Enrico Kern <enrico.kern@xxxxxxxxxx> wrote:
:
:> Hmm, then that is not really an option for me. Maybe someone from the devs can
:> shed some light on why it triggers a migration even when you only have OSDs of
:> the same class? I have a few petabytes of storage in each cluster. If it
:> starts migrating everything again, that will result in a huge
:> performance bottleneck. My plan was to set the existing pools to use the
:> new hdd-only crush rule and add SSD OSDs later.
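
As a side note on the "why": a class-bound rule selects from per-class shadow
buckets (default~hdd, <host>~hdd and so on) which get their own bucket IDs, so
the inputs to CRUSH change and most PGs are remapped even though the same
physical OSDs remain eligible. On Luminous or newer you can see those shadow
buckets with:

ceph osd crush tree --show-shadow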
:>
:> On Mon, Aug 20, 2018 at 11:22 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
:> wrote:
:>
:>>
:>> I just recently did the same. Take into account that everything starts
:>> migrating. However weird it may seem, I had an hdd-only test cluster, changed
:>> the crush rule to the hdd class, and it still took a few days, totally
:>> unnecessary as far as I am concerned.
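
If you want to gauge the impact before switching a pool over, a rough sketch
is to sample mappings from both rules offline and compare them. The rule IDs
0 and 1 and the replica count of 3 below are placeholders for your setup:

ceph osd getcrushmap -o cm
# sample 1024 inputs under the current rule and under the new hdd rule,
# keeping only the resulting OSD sets
crushtool -i cm --test --show-mappings --rule 0 --num-rep 3 \
    --min-x 0 --max-x 1023 | awk '{print $NF}' > old.txt
crushtool -i cm --test --show-mappings --rule 1 --num-rep 3 \
    --min-x 0 --max-x 1023 | awk '{print $NF}' > new.txt
# rough count of sampled inputs that would map differently
diff old.txt new.txt | grep -c '^>'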
:>>
:>>
:>>
:>>
:>> -----Original Message-----
:>> From: Enrico Kern [mailto:enrico.kern@xxxxxxxxxx]
:>> Sent: Monday, 20 August 2018 11:18
:>> To: ceph-users@xxxxxxxxxxxxxx
:>> Subject:  Set existing pools to use hdd device class only
:>>
:>> Hello,
:>>
:>> right now we have multiple HDD-only clusters with either filestore
:>> journals on SSDs or, on newer installations, WAL etc. on SSD.
:>>
:>> I plan to extend our ceph clusters with SSDs to provide SSD-only pools.
:>> In Luminous we have device classes, so I should be able to do this
:>> without editing the crush map by hand.
:>>
:>> The device class documentation says I can create "new" pools that use
:>> only SSDs, for example:
:>>
:>> ceph osd crush rule create-replicated fast default host ssd
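
It can help to check what the cluster already reports for device classes and
what such a class-bound rule ends up looking like. A minimal sketch, where
"fast" is just the rule name from the example above:

ceph osd crush class ls          # device classes ceph detected (hdd, ssd, ...)
ceph osd crush rule ls           # existing rules
ceph osd crush rule dump fast    # inspect the generated class-bound rule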
:>>
:>> What happens if I apply this to an existing pool but with the hdd device
:>> class? I wasn't able to test this yet in our staging cluster and wanted to
:>> ask what the right way to do this is.
:>>
:>> I want to set an existing pool called volumes to only use OSDs with the hdd
:>> class. Right now all OSDs are HDDs. So in theory it should not use any
:>> newly created SSD OSDs once I set those up with the ssd class, right?
:>>
:>> So for the existing pool, running:
:>>
:>> ceph osd crush rule create-replicated volume-hdd-only default host hdd
:>> ceph osd pool set volumes crush_rule volume-hdd-only
:>>
:>> should be the way to go, right?
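
After setting the rule it is worth confirming the pool picked it up and
watching the resulting remapping, for example:

ceph osd pool get volumes crush_rule       # should now show volume-hdd-only
ceph osd crush rule dump volume-hdd-only   # confirm it filters on class hdd
ceph -s                                    # watch for remapped/misplaced PGs

Note that without the offline crush map rewrite sketched further up, switching
the rule will still remap most PGs.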
:>>
:>>
:>> Regards,
:>>
:>> Enrico
:>>
:>> --
:>>
:>>
:>> Enrico Kern
:>>
:>> VP IT Operations
:>>
:>>
:>> enrico.kern@xxxxxxxxxx
:>> +49 (0) 30 555713017 / +49 (0) 152 26814501
:>>
:>> skype: flyersa
:>> LinkedIn Profile <https://www.linkedin.com/in/enricokern>
:>>
:>>
:>> <https://www.glispa.com/>
:>>
:>>
:>> Glispa GmbH | Berlin Office
:>>
:>> Stromstr. 11-17  <https://goo.gl/maps/6mwNA77gXLP2>
:>> Berlin, Germany, 10551  <https://goo.gl/maps/6mwNA77gXLP2>
:>>
:>> Managing Director Din Karol-Gavish
:>> Registered in Berlin
:>> AG Charlottenburg | HRB 114678B
:>> –––––––––––––––––––––––––––––
:>>
:>>
:>>
:>
:> --
:>
:> *Enrico Kern*
:> VP IT Operations
:>
:> enrico.kern@xxxxxxxxxx
:> +49 (0) 30 555713017 / +49 (0) 152 26814501
:> skype: flyersa
:> LinkedIn Profile <https://www.linkedin.com/in/enricokern>
:>
:>
:> <https://www.glispa.com/>
:>
:> *Glispa GmbH* | Berlin Office
:> Stromstr. 11-17  <https://goo.gl/maps/6mwNA77gXLP2>
:> Berlin, Germany, 10551  <https://goo.gl/maps/6mwNA77gXLP2>
:>
:> Managing Director Din Karol-Gavish
:> Registered in Berlin
:> AG Charlottenburg | HRB 114678B
:> –––––––––––––––––––––––––––––






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux