Nautilus, k+m in erasure coding profile vs size+min_size

Dear all,

I am doing some tests with Nautilus and CephFS on an erasure-coded pool.

I noticed something strange in the relationship between k+m in my erasure profile and size+min_size in the pool that gets created:

> test@icadmin004:~$ ceph osd erasure-code-profile get ecpool-4-2
> crush-device-class=
> crush-failure-domain=osd
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=4
> m=2
> plugin=jerasure
> technique=reed_sol_van
> w=8
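
If I read this profile correctly, each object should be split into k = 4 data chunks plus m = 2 coding chunks, so I would expect the pool geometry to come out as follows (assuming the default 4096-byte stripe unit, which I have not changed):

  # expected values derived from the profile above (my understanding, not documented output)
  size         = k + m    = 4 + 2    = 6      chunks per object
  stripe_width = k * 4096 = 4 * 4096 = 16384  bytes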

> test@icadmin004:~$ ceph --cluster test osd pool create cephfs_data 8 8 erasure ecpool-4-2
> pool 'cephfs_data' created

> test@icadmin004:~$ ceph osd pool ls detail | grep cephfs_data
> pool 14 'cephfs_data' erasure size 6 min_size 5 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646 flags hashpspool stripe_width 16384
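
So size matches k + m = 6, but min_size comes out as 5 rather than k = 4. For reference, this is how the value could be read directly, and the set command below is only there to illustrate that min_size is a per-pool property separate from the erasure profile (I have not run it):

  # query the current min_size of the pool (pool name as above)
  ceph osd pool get cephfs_data min_size

  # min_size could in principle be overridden per pool (illustration only)
  ceph osd pool set cephfs_data min_size 4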

Why is min_size = 5 and not 4?

Best,

-- 
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


