Re: HEALTH_ERR, size and min_size

Quoting Ml Ml (mliebherr99@xxxxxxxxxxxxxx):
> Hello Stefan,
> 
> The status was "HEALTH_OK" before I ran those commands.

\o/

> root@ceph01:~# ceph osd crush rule dump
> [
>     {
>         "rule_id": 0,
>         "rule_name": "replicated_ruleset",
>         "ruleset": 0,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>                 "item": -1,
>                 "item_name": "default"
>             },
>             {
>                 "op": "chooseleaf_firstn",
>                 "num": 0,
>                 "type": "host"


^^ This is the important part ... host is the failure domain (not osd),
but that's fine in your case.

Make sure you only remove OSDs from one failure domain (one host) at a
time and you're safe.
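For what it's worth, you can also ask the cluster itself whether a removal is safe before you pull anything. A rough sketch (osd.3 is just an example ID here, and these commands obviously need to run against your live cluster):

```shell
# Show the CRUSH hierarchy, so you can see which host each OSD lives on
ceph osd tree

# Would stopping this OSD leave any PG with fewer than min_size replicas?
ceph osd ok-to-stop osd.3

# Are all of this OSD's PGs fully recovered elsewhere, so it can be destroyed?
ceph osd safe-to-destroy osd.3
```

If either check complains, let recovery finish before touching the next OSD.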

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


