RE: Ceph cache-pool overflow

Hi!

Because the distribution is computed algorithmically by the CRUSH rules. So, as with any other
hash algorithm, the result depends on the 'data' itself. In Ceph, the 'data' is the object name.

Imagine that you have a simple plain hash table with 17 buckets, where the bucket index is
computed by a simple 'modulo 17' algorithm. If you insert the values 17, 34, 51, and so on,
they will all end up in a single bucket, leaving all the others empty. The same thing happens
with Ceph, except that the 'hash function' (the CRUSH map) is heavily parameterized by the
OSD tree topology, weights, etc. A small sketch of this effect is below.
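A minimal sketch of the bucket analogy (plain Python, not Ceph or CRUSH; the bucket count
and keys are made up for the example):

    # Toy hash-table sketch: not Ceph, just the 'modulo 17' analogy from above.
    from collections import Counter
    import zlib

    BUCKETS = 17

    # Pathological input: every key is a multiple of 17, so 'key % 17' puts
    # them all into bucket 0 and leaves the other 16 buckets empty.
    multiples = [17 * i for i in range(1, 11)]
    print(Counter(k % BUCKETS for k in multiples))        # Counter({0: 10})

    # More typical input: hash some object names into the 17 buckets.
    # The spread is roughly even, but never perfectly equal.
    names = [f"object-{i}" for i in range(1000)]
    counts = Counter(zlib.crc32(n.encode()) % BUCKETS for n in names)
    print(sorted(counts.values()))   # values cluster around 1000/17 ~ 59, but are not equal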

You can also set the number of placement groups per pool (pg_num and pgp_num). The higher
their values, the more uniform the distribution will be, but it also raises the resources
needed by the monitors.
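For a sense of why a higher PG count evens things out, here is a toy simulation (again not
CRUSH; PGs are mapped to OSDs with a plain pseudo-random choice, and the cluster size and
object counts are invented numbers):

    # Toy simulation: more PGs -> smaller spread of per-OSD object counts.
    import random
    import zlib
    from statistics import pstdev

    NUM_OSDS = 10          # made-up cluster size
    NUM_OBJECTS = 100_000  # made-up object count

    def per_osd_spread(pg_num: int, seed: int = 42) -> float:
        rng = random.Random(seed)
        # Stand-in for CRUSH: each PG lands on a pseudo-randomly chosen OSD.
        pg_to_osd = [rng.randrange(NUM_OSDS) for _ in range(pg_num)]
        osd_load = [0] * NUM_OSDS
        for i in range(NUM_OBJECTS):
            pg = zlib.crc32(f"obj-{i}".encode()) % pg_num   # object name -> PG
            osd_load[pg_to_osd[pg]] += 1                    # PG -> OSD
        return pstdev(osd_load)  # spread of per-OSD object counts

    for pg_num in (16, 128, 1024, 8192):
        print(pg_num, round(per_osd_spread(pg_num)))
        # The standard deviation should shrink as pg_num grows.

In a real cluster pg_num and pgp_num are per-pool settings (ceph osd pool set <pool> pg_num <n>),
and raising them causes data movement, so it should be done with some care.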


Megov Igor
CIO, Yuterra

________________________________________
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Квапил, Андрей <kvaps@xxxxxxxxxxx>
Sent: September 7, 2015 14:44
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Ceph cache-pool overflow

Hello. Can somebody answer a simple question?

Why does Ceph, with equal OSD weights and sizes, not write to them equally? Some get a bit
more, others a bit less...

Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com