Re: Luminous Bluestore performance, bcache

Hi Richard,

This is an interesting test for me too, as I am planning to migrate to Bluestore and was considering repurposing the SSDs that we currently use for journals.

Are you using Filestore or Bluestore for the OSDs?

Also, when you run your tests, how good a hit ratio do you see on bcache?
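For reference, bcache exposes per-backing-device hit/miss counters in sysfs, so something along these lines should show it (a rough sketch only; the bcache* naming is just an example and the exact paths may vary by kernel version):

    for dev in /sys/block/bcache*/bcache; do
        echo "$dev"
        cat "$dev"/stats_total/cache_hit_ratio   # lifetime hit ratio, in percent
        cat "$dev"/stats_day/cache_hit_ratio     # rolling daily figure
    done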

Are you using a lot of random data for your benchmarks? How large is your test file for each VM?
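To make sure we're comparing like with like, something along the lines of the fio job below is what I have in mind; the 10G file size and pure random writes are only my assumptions, so please correct me if your test differs:

    # Sketch only: adjust size/pattern to match the real test.
    fio --name=vm-test --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --size=10G \
        --rate_iops=50 --time_based --runtime=600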

We played around with a few caching solutions a few years back (EnhanceIO and a few others I can't remember now) and saw a very poor hit ratio on the caching layer. I was wondering if you are seeing a different picture?

Cheers

----- Original Message -----
> From: "Richard Bade" <hitrich@xxxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Thursday, 28 June, 2018 05:42:34
> Subject:  Luminous Bluestore performance, bcache

> Hi Everyone,
> There's been a few threads go past around this but I haven't seen any
> that pointed me in the right direction.
> We've recently set up a new Luminous (12.2.5) cluster with 5 hosts,
> each with 12 4TB Seagate Constellation ES spinning disks for OSDs. We
> also have 2x 400GB Intel DC P3700s per node. We're using this for RBD
> storage for VMs running under Proxmox VE.
> I first set these up with the DB partition (approx. 60GB per OSD) on
> NVMe and the data directly on the spinning disk, using ceph-deploy
> create. This worked great and was very simple.
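[Interjecting here for anyone following along: on Luminous the ceph-deploy call for that layout should look roughly like the sketch below. The device names are placeholders and the exact flags depend on the ceph-deploy version, so this is not necessarily the exact command Richard ran.]

    ceph-deploy osd create --data /dev/sdb --block-db /dev/nvme0n1p1 osd-host1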
> However, performance wasn't great. I fired up 20 VMs, each running fio
> and trying to attain 50 IOPS. Ceph was only just able to keep up with
> the 1000 IOPS this generated, and VMs started to have trouble hitting
> their 50 IOPS target.
> So I rebuilt all the OSDs, halving the DB space (~30GB per OSD) and
> adding a 200GB bcache partition shared between 6 OSDs. Again this
> worked great with ceph-deploy create and was very simple.
> I have seen a vast improvement in my synthetic test. I can now run
> 100 test VMs at 50 IOPS each, generating a constant 5000 IOPS load,
> and each one can keep up without any trouble.
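[Interjecting again: if I understand the layout, the bcache side of that rebuild would look roughly like this sketch, assuming bcache-tools is installed and that /dev/nvme0n1p2 is the 200GB cache partition. Device names are placeholders, and these are not necessarily Richard's exact commands.]

    make-bcache -C /dev/nvme0n1p2       # 200GB cache partition on the NVMe
    make-bcache -B /dev/sdb             # backing disk, appears as /dev/bcache0
    cset=$(bcache-super-show /dev/nvme0n1p2 | awk '/cset.uuid/ {print $2}')
    echo "$cset" > /sys/block/bcache0/bcache/attach
    ceph-deploy osd create --data /dev/bcache0 --block-db /dev/nvme0n1p1 osd-host1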
> 
> The question I have is whether the poor performance out of the box is
> expected, or is there some tweaking I should be doing to make this
> usable for RBD images? Are others able to work OK with this kind of
> config at a small scale like my 60 OSDs, or is it only workable at a
> larger scale?
> 
> Regards,
> Rich
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com