Re: bluestore worries

Hello,

I'd recommend doing this with just one OSD first, so you can compare
and contrast, ideally of course on an additional node (but you're
unlikely to have one spare).
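
If you go that route, a minimal sketch for redeploying a single OSD as
Bluestore might look like this (assuming a ceph-volume based setup;
osd.12 and /dev/sdX are placeholders, adapt to your deployment tooling):

    ceph osd out 12
    systemctl stop ceph-osd@12
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX --destroy
    # recreate the OSD as Bluestore, reusing the old ID
    ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 12

Then let it backfill and compare it against its Filestore peers under
the same load.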

In my (very specific) use case, an older Jewel cluster with Filestore
and colocated journals, a 3-node SSD pool with 5 SSDs per node, sees
2-3% utilization with 200 VMs.
Just 60 of the same VMs cause 6-8% utilization with Nautilus against a
4-node pool, also with 5 SSDs per node (WAL/DB colocated).
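
For anyone wanting to reproduce such numbers: per-device utilization
can be sampled with iostat from sysstat, e.g.

    iostat -x sdb sdc sdd sde sdf 10

with the busy percentage in the %util column (the device names above
are placeholders for the pool's SSDs).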

The Ceph nodes are identical other than the SSDs, and the newer ones
supposedly have higher IOPS and lower latency, so if anything the newer
hardware should be faster. The most likely explanation for this
significant difference is therefore Bluestore/RocksDB.
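
If you want to verify where the extra time goes on a Bluestore OSD, the
per-OSD perf counters are a good starting point (run on the OSD's host;
counter names vary somewhat between releases):

    # the bluestore and rocksdb sections contain e.g. commit and
    # compaction latency counters
    ceph daemon osd.0 perf dump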

Regards,

Christian

On Fri, 13 Dec 2019 09:12:55 -0500 Frank R wrote:

> Hi all,
> 
> I am thinking about converting a Filestore cluster to Bluestore.
> 
> The OSD nodes have 16x 4TB 7200 RPM SATA OSDs with NVMe write journals.
> The NVMe drives should be large enough to house a ~30G DB/WAL per OSD.
> 
> I am worried that I will see a significant performance hit when the
> deferred writes to the NVMe journals are eliminated with Bluestore.
> 
> Has anyone converted a similar setup to Bluestore? If so, what was the
> performance impact?
> 
> thx
> Frank


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Mobile Inc.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


