Luminous Bluestore performance, bcache

Hi Everyone,
There have been a few threads on this topic recently, but I haven't
seen any that pointed me in the right direction.
We've recently set up a new Luminous (12.2.5) cluster with 5 hosts,
each with 12 x 4TB Seagate Constellation ES spinning disks for OSDs.
We also have 2 x 400GB Intel DC P3700 NVMe drives per node. We're
using this cluster for RBD storage for VMs running under Proxmox VE.
I first set these up with the Bluestore DB partition (approx. 60GB
per OSD) on NVMe and the data directly on the spinning disk, using
ceph-deploy create. This worked great and was very simple.
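For reference, each OSD was created with something along these lines
(assuming the newer ceph-deploy syntax; the device names, hostname
and pre-created ~60GB NVMe partition are just illustrative):

    # data on the spinner, Bluestore DB on a ~60GB NVMe partition
    ceph-deploy osd create --data /dev/sdb --block-db /dev/nvme0n1p1 ceph-node1

repeated for each of the 12 disks per host.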
However, performance wasn't great. I fired up 20 VMs, each running
fio and trying to attain 50 IOPS. Ceph was only just able to keep up
with the 1000 IOPS this generated, and the VMs started to have
trouble hitting their 50 IOPS target.
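For concreteness, the per-VM fio load was a rate-limited small random
write job along these lines (the exact job details here, e.g.
/dev/vdb, 4k blocks, queue depth 1, are illustrative):

    fio --name=vmload --filename=/dev/vdb --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=1 \
        --rate_iops=50 --time_based --runtime=600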
So I rebuilt all the OSDs, halving the DB space (~30GB per OSD) and
adding a 200GB bcache partition shared between 6 OSDs. Again this
worked great with ceph-deploy create and was very simple.
I have seen a vast improvement in my synthetic test: I can now run
100 test VMs at 50 IOPS each, generating a constant 5000 IOPS load,
and every one of them keeps up without any trouble.
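For anyone interested in the bcache side, the per-disk setup is
roughly the standard bcache-tools sequence (device names and the
pre-created ~200GB NVMe partition are placeholders):

    # turn the ~200GB NVMe partition into a cache device
    # (one cache set shared by 6 OSDs)
    make-bcache -C /dev/nvme0n1p3
    # turn each spinner into a backing device
    make-bcache -B /dev/sdb
    # attach the backing device to the cache set; CSET_UUID comes
    # from bcache-super-show /dev/nvme0n1p3
    echo $CSET_UUID > /sys/block/bcache0/bcache/attach
    # then create the OSD on the cached device as before
    ceph-deploy osd create --data /dev/bcache0 --block-db /dev/nvme0n1p2 ceph-node1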

The question I have is whether the poor performance out of the box
is expected, or whether there is some kind of tuning I should be
doing to make this usable for RBD images. Are others able to work OK
with this kind of config at a small scale like my 60 OSDs, or is it
only workable at a larger scale?

Regards,
Rich