Hi,
1. Is there a formula to calculate the optimal size of partitions on
the SSD for each OSD, given their capacity and IO performance? Or is
there a rule of thumb on this?
Wido and probably some other users have already mentioned 10 GB per 1 TB
of OSD (i.e. 1/100th of the OSD size). Regarding the WAL size, this is
what SUSE Enterprise Storage (their Ceph-based product) recommends:
"Between 500MB and 2GB for the WAL device. The WAL size depends on the
data traffic and workload, not on the OSD size. If you know that an
OSD is physically able to handle small writes and overwrites at a very
high throughput, more WAL is preferred rather than less WAL. 1GB WAL
device is a good compromise that fulfills most deployments."
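To put a number on that rule of thumb, here is a quick back-of-the-envelope
sketch (the 4 TB OSD is just an example value, adjust to your disks):

  # Rule-of-thumb sizing for a single 4 TB OSD (example values only)
  osd_size_gb=4000
  db_size_gb=$(( osd_size_gb / 100 ))  # 1/100th of the OSD size => 40 GB for block.db
  wal_size_gb=1                        # the 1 GB "good compromise" from the SUSE docs
  echo "block.db: ${db_size_gb} GB, block.wal: ${wal_size_gb} GB"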
2. Is there a formula to find out the max number of OSDs a single SSD
can serve for journaling? Or any rule of thumb?
SUSE recommends no more than six journals on the same SSD, or twelve if
you're using NVMe disks; beyond that the performance of the SSD will
degrade.
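Just to illustrate what sharing one SSD between several OSDs looks like in
practice: you pre-partition the device (one DB partition per OSD) and hand
each partition to ceph-volume. The device names below are made up, so adapt
them to your setup:

  # One pre-created DB partition per OSD on the shared NVMe (names are examples)
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
  ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/nvme0n1p3
  # ...and so on, up to ~6 partitions per SATA SSD or ~12 per NVMe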
3. What is the procedure to replace an SSD journal device used for
DB+WAL in a hot cluster?
We posted a procedure on our blog [1] describing how to replace failed
SSDs used for RocksDB and WAL (in our case both on the same disk).
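The blog post has the tested step-by-step procedure; very roughly, the
general flow looks like the outline below. Note this is only an orientation
with made-up device paths and OSD id, it assumes an OSD where block.db is a
plain symlink in the OSD data directory, and it is not a substitute for the
actual procedure in the post:

  ceph osd set noout                   # avoid rebalancing while the OSD is down
  systemctl stop ceph-osd@12           # stop the affected OSD (id 12 is an example)
  # copy the old DB+WAL partition onto the (equal or larger) replacement partition
  dd if=/dev/sdX1 of=/dev/sdY1 bs=4M
  # repoint the OSD at the new partition if the device path changed
  ln -sf /dev/sdY1 /var/lib/ceph/osd/ceph-12/block.db
  systemctl start ceph-osd@12
  ceph osd unset noout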
I hope this answers your questions sufficiently.
Regards,
Eugen
[1]
http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/
Quoting Cody <codeology.lab@xxxxxxxxx>:
Hi everyone,
As a newbie, I have some questions about using SSD as the Bluestore
journal device.
1. Is there a formula to calculate the optimal size of partitions on
the SSD for each OSD, given their capacity and IO performance? Or is
there a rule of thumb on this?
2. Is there a formula to find out the max number of OSDs a single SSD
can serve for journaling? Or any rule of thumb?
3. What is the procedure to replace an SSD journal device used for
DB+WAL in a hot cluster?
Thank you all very much!
Cody
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com