Re: Nautilus, k+m erasure coding a profile vs size+min_size

The simple answer is that k+1 is the default min_size for EC pools. min_size is the number of failure domains that must still be available for the pool to keep accepting writes. If you set min_size to k, you enter dangerous territory: lose one more failure domain (OSD or host) while the pool is recovering and you can potentially lose data. This is the same reason min_size=1 is a bad idea for replicated pools (which has been discussed extensively on this list).
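With your profile (k=4, m=2) the arithmetic is: size = k+m = 6 and min_size = k+1 = 5, which matches the pool detail output below. If you have weighed the risk and still want min_size = k, you can lower it per pool; a sketch using your pool name:

  # WARNING: at min_size = k, one more failure during recovery can mean data loss
  ceph osd pool set cephfs_data min_size 4

I would not recommend this for any pool holding data you care about.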

On Tue, 21 May 2019 at 12:52, Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
Dear all,

I am doing some tests with Nautilus and cephfs on an erasure-coded pool.

I noticed something strange between k+m in my erasure profile and size+min_size in the created pool:

> test@icadmin004:~$ ceph osd erasure-code-profile get ecpool-4-2
> crush-device-class=
> crush-failure-domain=osd
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=4
> m=2
> plugin=jerasure
> technique=reed_sol_van
> w=8

> test@icadmin004:~$ ceph --cluster test osd pool create cephfs_data 8 8 erasure ecpool-4-2
> pool 'cephfs_data' created

> test@icadmin004:~$ ceph osd pool ls detail | grep cephfs_data
> pool 14 'cephfs_data' erasure size 6 min_size 5 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646 flags hashpspool stripe_width 16384

Why is min_size = 5 and not 4?

Best,

--
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
