Re: Nautilus, k+m erasure coding a profile vs size+min_size

>> I am doing some tests with Nautilus and cephfs on erasure coding pool.
>>
>> I noticed something strange between k+m in my erasure profile and size+min_size in the pool created:
>>
>>> test@icadmin004:~$ ceph osd erasure-code-profile get ecpool-4-2
>>> crush-device-class=
>>> crush-failure-domain=osd
>>> crush-root=default
>>> jerasure-per-chunk-alignment=false
>>> k=4
>>> m=2
>>> plugin=jerasure
>>> technique=reed_sol_van
>>> w=8
>>
>>> test@icadmin004:~$ ceph --cluster test osd pool create cephfs_data 8 8 erasure ecpool-4-2
>>> pool 'cephfs_data' created
>>
>>> test@icadmin004:~$ ceph osd pool ls detail | grep cephfs_data
>>> pool 14 'cephfs_data' erasure size 6 min_size 5 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2646
>>> flags hashpspool stripe_width 16384
>>
>> Why min_size = 5 and not 4 ?
>>
> this question comes up regularly and is being discussed right now:
> 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034867.html

Oh thanks, I had missed that thread, that makes sense. I agree with some of the comments there that it is a little bit confusing.
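
For anyone finding this later: as I understand it from that thread, Nautilus sets min_size = k+1 on erasure-coded pools so a PG stops accepting I/O before it has lost all redundancy (with min_size = k, a write could land on exactly k chunks, and any further failure would then mean data loss). If you really want the old behaviour, you can lower it yourself on the pool from my test above:

test@icadmin004:~$ ceph osd pool set cephfs_data min_size 4

but keeping it at k+1 looks like the safer default.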

Best,

-- 
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


