Re: block db sizing and calculation


 



One tricky thing is that each RocksDB level lives either 100% on SSD or 100% on HDD, so you either tweak the RocksDB configuration or accept a lot of wasted space: e.g. a 20GB DB partition makes no difference compared to a 3GB one (under the default RocksDB configuration).
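As a rough illustration of why only certain sizes matter: the sketch below assumes BlueStore's default RocksDB tuning (max_bytes_for_level_base of ~256 MB and max_bytes_for_level_multiplier of 10) and the simplification that a level only stays on the fast device if the whole level fits; the exact numbers are assumptions for illustration, not measured values.

# Rough sketch: which RocksDB levels fit entirely on a given DB partition.
# Assumes default-ish level sizing (256 MB base, 10x multiplier); these are
# assumptions for illustration, not authoritative BlueStore figures.

LEVEL_BASE_GB = 0.25   # ~max_bytes_for_level_base, 256 MB
MULTIPLIER = 10        # ~max_bytes_for_level_multiplier

def usable_db_gb(partition_gb, levels=5):
    """Cumulative size of the levels that fit wholly on the fast device."""
    total, level_size = 0.0, LEVEL_BASE_GB
    for _ in range(levels):
        if total + level_size > partition_gb:
            break          # this level spills entirely to the slow (HDD) device
        total += level_size
        level_size *= MULTIPLIER
    return total

for size in (3, 20, 30, 300):
    print(f"{size:>4} GB partition -> ~{usable_db_gb(size):.1f} GB of levels actually on flash")

# Output shows a 20 GB partition holds the same levels as a ~3 GB one; the
# next useful steps are roughly 30 GB and then 300 GB.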

Janne Johansson <icepic.dz@xxxxxxxxx> wrote on Tue, 14 Jan 2020 at 16:43:
(sorry for empty mail just before)
 
I'm planning to split the block db onto a separate flash device, which I
would also like to use as an OSD for erasure-coding metadata for RBD
devices.

If I want to use 14x 14 TB HDDs per node,
https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
recommends a minimum size of 140 GB per 14 TB HDD.
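A quick back-of-the-envelope sketch of what that adds up to per node, using only the figures above; the ~300 GB rounding is merely an assumption based on the RocksDB level steps mentioned earlier, not a documented requirement.

# Back-of-the-envelope for the layout above (figures from this mail, not measured):
hdds_per_node = 14
db_per_hdd_gb = 140                  # the minimum quoted above for a 14 TB HDD

flash_needed_gb = hdds_per_node * db_per_hdd_gb
print(f"block.db per node at 140 GB each: {flash_needed_gb} GB (~{flash_needed_gb/1000:.2f} TB)")

# If each DB were instead rounded up to the next useful RocksDB step (~300 GB),
# the node would need 14 * 300 = 4200 GB (~4.2 TB) of flash.
print(f"block.db per node at 300 GB each: {hdds_per_node * 300} GB")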

Is there any recommendation for how many OSDs a single flash device can
serve? The Optane ones can do 2000 MB/s write and 500,000 IOPS.


I think many Ceph admins are more concerned with having many drives share the same DB drive, since if the DB drive fails, all of those OSDs are lost at the same time.
Optanes and decent NVMes are probably capable of handling tons of HDDs, so the bottleneck ends up being somewhere else, but the failure scenarios are a bit scary if the whole host is lost just because that one DB device acts up.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
