Re: Undersized fix for small cluster, other than adding a 4th node?


 



The default failure domain is host, so you would need 5 (= k+m) nodes
for this profile. With 4 nodes you could run k=3,m=1 or k=2,m=2;
otherwise you would have to change the failure domain to OSD.
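For example, a sketch of creating an EC profile with the failure domain set to OSD instead of host (the profile and pool names below are placeholders, and pg counts should be sized for your cluster):

```shell
# Create an erasure-code profile that spreads chunks across OSDs,
# not hosts, so k+m=5 chunks can land on a 3-node cluster.
ceph osd erasure-code-profile set ec32-osd k=3 m=2 crush-failure-domain=osd

# Create a pool using that profile (64 is an example pg_num).
ceph osd pool create ecpool 64 64 erasure ec32-osd
```

Note that with failure-domain=osd, losing a single host can take out more than m chunks of a PG, so this trades availability for fitting the profile on fewer nodes.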

On Fri, Nov 10, 2017 at 10:52 AM, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
> I added an erasure k=3,m=2 coded pool on a 3 node test cluster and am
> getting these errors.
>
>    pg 48.0 is stuck undersized for 23867.000000, current state
> active+undersized+degraded, last acting [9,13,2147483647,7,2147483647]
>     pg 48.1 is stuck undersized for 27479.944212, current state
> active+undersized+degraded, last acting [12,1,2147483647,8,2147483647]
>     pg 48.2 is stuck undersized for 27479.944514, current state
> active+undersized+degraded, last acting [12,1,2147483647,3,2147483647]
>     pg 48.3 is stuck undersized for 27479.943845, current state
> active+undersized+degraded, last acting [11,0,2147483647,2147483647,5]
>     pg 48.4 is stuck undersized for 27479.947473, current state
> active+undersized+degraded, last acting [8,4,2147483647,2147483647,5]
>     pg 48.5 is stuck undersized for 27479.940289, current state
> active+undersized+degraded, last acting [6,5,11,2147483647,2147483647]
>     pg 48.6 is stuck undersized for 27479.947125, current state
> active+undersized+degraded, last acting [5,8,2147483647,1,2147483647]
>     pg 48.7 is stuck undersized for 23866.977708, current state
> active+undersized+degraded, last acting [13,11,2147483647,0,2147483647]
>
> Mentioned here
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-May/009572.html
> the problem was resolved by adding an extra node. I already changed
> min_size to 3. Or should I change to k=2,m=2, and do I still get good
> storage savings then? How do you calculate the storage savings of an
> erasure-coded pool?
>
>
>
>
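On the storage-savings question: the usable fraction of an erasure-coded pool is k/(k+m), and the raw-space multiplier is (k+m)/k. A quick sketch comparing the profiles discussed above:

```shell
# For each (k, m) profile, print usable fraction and raw-space overhead.
# usable = k/(k+m); raw multiplier = (k+m)/k.
for km in "3 2" "2 2" "3 1"; do
  set -- $km
  awk -v k="$1" -v m="$2" \
    'BEGIN{printf "k=%d,m=%d: usable %.0f%%, raw overhead %.2fx\n",
           k, m, 100*k/(k+m), (k+m)/k}'
done
```

So k=3,m=2 stores data at 1.67x raw usage (60% usable), while k=2,m=2 costs 2x raw, the same as 2-way replication but with the ability to lose any two chunks.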
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


