Re: EC K + M Size

On Sat, 3 Nov 2018 at 09:10, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:
>
> Hello,
>
> Tried to do some reading online but was unable to find much.
>
> I can imagine that a higher K + M size with EC requires more CPU to reassemble the shards into the required object.
>
> But is there any benefit or downside to going with a larger K + M? Obviously there is the size benefit, but technically could it also improve reads, due to more OSDs each providing a smaller section of the data required to assemble the object?
>
> Are there any gotchas that should be known about, for example when going with a 4+2 vs a 10+2?
>

If one host goes down in a 10+2 scenario, then 11 or 12 other
machines need to get involved in order to repair the lost data. This
means that if your cluster has close to 12 hosts, most or all of the
servers get extra work during recovery. I saw an old post from Yahoo,
from long ago, stating that the primary (whose job it is to piece the
shards together) would only send out 8 requests at any given time;
IF that is still true, it would make 6+2 somewhat more efficient.
Still, EC is seldom about performance; it is more about saving space
while still allowing 1-2-3 drives to die without losing data, by
using 1-2-3 checksum pieces.
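
To put rough numbers on that trade-off, here is a back-of-the-envelope
sketch in plain Python (the 12-host cluster size and the 4+2 / 6+2 /
10+2 profiles are only illustrative assumptions, not something from
the Ceph docs): larger K means less raw-space overhead, but a
single-shard rebuild has to read from more hosts.

# Rough comparison of EC profiles: raw-space overhead vs. how many
# hosts a single-shard rebuild has to read from. The 12-host cluster
# and the profiles below are illustrative assumptions.

def ec_summary(k, m, hosts=12):
    overhead = (k + m) / k      # raw bytes stored per byte of user data
    # Rebuilding one lost shard needs any k of the surviving shards,
    # so at least k of the remaining hosts read data during recovery.
    surviving = hosts - 1
    readers = min(k, surviving)
    return overhead, readers, surviving

for k, m in [(4, 2), (6, 2), (10, 2)]:
    overhead, readers, surviving = ec_summary(k, m)
    print(f"{k}+{m}: {overhead:.2f}x raw space, rebuild reads from "
          f"at least {readers} of {surviving} surviving hosts")

With 12 hosts that prints 1.50x / 1.33x / 1.20x raw space, and a
rebuild that reads from 4, 6 or at least 10 of the 11 surviving hosts
respectively, which is the "most or all servers get extra work"
effect described above.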


-- 
May the most significant bit of your life be positive.