Re: Adjusting replicas on argonaut

What are your CRUSH rules? Depending on how you set this cluster up,
they may not allow more than one replica per host, and since you've
only got two hosts, CRUSH can't satisfy your request for 3 copies.
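You can check by decompiling the CRUSH map (a sketch using the standard
tools; adjust the paths to taste):

ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

If the rule used by the metadata pool has a step like "step chooseleaf
firstn 0 type host", only one replica goes on each host. You'd either
need to add a third host, or (giving up host-level redundancy) change
that step to "type osd", then recompile and inject the map:

crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
ceph osd setcrushmap -i /tmp/crushmap.new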
-Greg

On Tue, Jan 8, 2013 at 2:11 PM, Bryan Stillwell
<bstillwell@xxxxxxxxxxxxxxx> wrote:
> I tried increasing the number of metadata replicas from 2 to 3 on my
> test cluster with the following command:
>
> ceph osd pool set metadata size 3
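>
> (For what it's worth, "ceph osd dump | grep 'rep size'" should confirm
> the new replica count took effect; that assumes the argonaut output
> format, which prints "rep size" per pool.)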
>
>
> Afterwards it appears that all the metadata placement groups switched
> to a degraded state and don't seem to be attempting to recover:
>
> 2013-01-08 14:49:37.352735 mon.0 [INF] pgmap v156393: 1920 pgs: 1280
> active+clean, 640 active+degraded; 903 GB data, 1820 GB used, 2829 GB
> / 4650 GB avail; 1255/486359 degraded (0.258%)
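>
> (A quick way to see which PGs are affected is something like
> "ceph pg dump | grep degraded", and "ceph pg <pgid> query" on one of
> them shows why it's stuck; <pgid> here is a placeholder.)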
>
>
> Does anything need to be done after increasing the number of replicas?
>
> Here's what the OSD tree looks like:
>
> root@a1:~# ceph osd tree
> dumped osdmap tree epoch 1303
> # id    weight  type name       up/down reweight
> -1      4.99557 pool default
> -3      4.99557         rack unknownrack
> -2      2.49779                 host b1
> 0       0.499557                                osd.0   up      1
> 1       0.499557                                osd.1   up      1
> 2       0.499557                                osd.2   up      1
> 3       0.499557                                osd.3   up      1
> 4       0.499557                                osd.4   up      1
> -4      2.49779                 host b2
> 5       0.499557                                osd.5   up      1
> 6       0.499557                                osd.6   up      1
> 7       0.499557                                osd.7   up      1
> 8       0.499557                                osd.8   up      1
> 9       0.499557                                osd.9   up      1
>
>
> Thanks,
> Bryan

