Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

This is actually not so nice, because this remapping is now causing a 
nearfull warning.
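
Since the rebalance is pushing OSDs towards nearfull, here is a minimal sketch of how the data movement can be paused and inspected while deciding what to do (standard ceph CLI flags; the osd_max_backfills value is only an example):

    # stop further rebalancing/backfill so nearfull OSDs do not fill up more
    ceph osd set norebalance
    ceph osd set nobackfill

    # see which OSDs are close to the nearfull ratio
    ceph osd df tree

    # throttle backfill before resuming (example value)
    ceph tell osd.* injectargs '--osd-max-backfills 1'

    # resume when ready
    ceph osd unset nobackfill
    ceph osd unset norebalance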




-----Original Message-----
From: Dan van der Ster [mailto:dan@xxxxxxxxxxxxxx] 
Sent: Wednesday, 13 June 2018 14:02
To: Marc Roos
Cc: ceph-users
Subject: Re:  Add ssd's to hdd cluster, crush map class hdd 
update necessary?

See this thread:

http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html
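
For anyone reading this later: newer Ceph releases (Nautilus and up, so after this thread) added a crushtool --reclassify mode that handles exactly this migration while trying to preserve bucket IDs. A rough sketch, to be checked against the crushtool man page of your version:

    ceph osd getcrushmap -o original
    crushtool -i original --reclassify \
        --set-subtree-class default hdd \
        --reclassify-root default hdd \
        -o adjusted
    # shows how many mappings would change before the map is injected
    crushtool -i original --compare adjusted
    ceph osd setcrushmap -i adjusted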

(Wido -- should we kill the ceph-large list??)


On Wed, Jun 13, 2018 at 1:14 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
> I wonder whether this is a bug. Adding the hdd class to an all-hdd 
> cluster should not result in 60% of the objects being moved around 
> (see the crushtool dry-run sketch after the status output below).
>
>
> pool fs_data.ec21 id 53
>   3866523/6247464 objects misplaced (61.889%)
>   recovery io 93089 kB/s, 22 objects/s
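
A rough way to gauge the impact of a CRUSH map edit before injecting it is to dry-run it offline with crushtool. In this sketch the rule id (4) and num-rep (3) are taken from the fs_data.ec21 rule quoted further down; the file names are arbitrary:

    # grab and decompile the current map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # edit crush.txt (e.g. add "class hdd" to the take step), then recompile
    crushtool -c crush.txt -o crush.new

    # sample mappings for rule 4 with 3 shards, before and after
    crushtool -i crush.bin --test --show-mappings --rule 4 --num-rep 3 > before.txt
    crushtool -i crush.new --test --show-mappings --rule 4 --num-rep 3 > after.txt
    diff before.txt after.txt | grep -c '^>'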
>
>
>
>
>
> -----Original Message-----
> From: Marc Roos
> Sent: Wednesday, 13 June 2018 7:14
> To: ceph-users; k0ste
> Subject: Re:  Add ssd's to hdd cluster, crush map class 
> hdd update necessary?
>
> I just added 'class hdd' here:
>
> rule fs_data.ec21 {
>         id 4
>         type erasure
>         min_size 3
>         max_size 3
>         step set_chooseleaf_tries 5
>         step set_choose_tries 100
>         step take default class hdd
>         step choose indep 0 type osd
>         step emit
> }
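
For what it's worth, the reason a take step with a device class reshuffles data even on an all-hdd cluster is that CRUSH builds a per-class "shadow" hierarchy (buckets like default~hdd) with its own bucket IDs, and, as the threads Dan linked discuss, those new IDs change placement. The shadow buckets can be inspected with (flag available since Luminous):

    ceph osd crush tree --show-shadow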
>
>
> -----Original Message-----
> From: Konstantin Shalygin [mailto:k0ste@xxxxxxxx]
> Sent: Wednesday, 13 June 2018 12:30
> To: Marc Roos; ceph-users
> Subject: Re:  Add ssd's to hdd cluster, crush map class hdd update 
> necessary?
>
> On 06/13/2018 12:06 PM, Marc Roos wrote:
> > Shit, I added this class and now everything starts backfilling (10%). 
> > How is this possible? I only have hdd's.
>
> This is normal when you change your crush and placement rules.
> Post your output and I will take a look:
>
> ceph osd crush tree
> ceph osd crush dump
> ceph osd pool ls detail
>
>
>
>
>
> k
>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


