Re: Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

See this thread:

http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html

(Wido -- should we kill the ceph-large list??)

-- dan
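
(For reference, the rule reassignment Konstantin describes below usually looks like the following -- the rule name, root, and pool name here are illustrative, adjust to your cluster:)

```shell
# Create a class-aware replicated rule restricted to hdd devices
# (rule name "replicated_hdd", root "default", failure domain "host")
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Point an existing pool at the new classified rule
# (pool name "mypool" is illustrative)
ceph osd pool set mypool crush_rule replicated_hdd

# Verify which rule the pool now uses
ceph osd pool get mypool crush_rule
```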



On Wed, Jun 13, 2018 at 12:27 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
> Shit, I added this class and now everything starts backfilling (10%). How
> is this possible? I only have hdds.
>
>
> -----Original Message-----
> From: Konstantin Shalygin [mailto:k0ste@xxxxxxxx]
> Sent: woensdag 13 juni 2018 9:26
> To: Marc Roos; ceph-users
> Subject: Re:  Add ssd's to hdd cluster, crush
> map class hdd update necessary?
>
> On 06/13/2018 09:01 AM, Marc Roos wrote:
> > Yes, but I already have some sort of test cluster with data in it. I
> > don’t think there are commands to modify existing rules that are being
> > used by pools. And the default replicated_ruleset doesn’t have a class
> > specified. I also have an erasure code rule without any class
> > definition for the file system.
>
> Yes, before migrating from a multi-root/classless crush map to a luminous+
> classified crush map you need to assign classified rulesets to your pools.
> This is safe to apply on production clusters, including on EC pools.
>
>
>
>
>
>
> k
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



