Re: Stable and fastest ceph version for RBD cluster.

Hi Özkan,

I've written a couple of articles that might be helpful:

https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/
https://ceph.io/en/news/blog/2023/reef-freeze-rbd-performance/
https://ceph.io/en/news/blog/2023/reef-freeze-rgw-performance/
https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/

There have been a lot of improvements since Nautilus, but some of the biggest revolve around bluestore cache handling, memory management, better RocksDB tuning, RocksDB column families, allocation metadata, and threading improvements (both OSD and client side). There has also been a significant (though unavoidable) regression due to a fix in RocksDB for a potential data corruption issue.
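For anyone wanting to experiment with the memory and cache handling mentioned above, most of it is driven through the cluster config subsystem. A minimal sketch (the option names are real Ceph options, but the values here are illustrative examples only, not recommendations for any particular hardware):

```shell
# Per-OSD memory budget that the bluestore cache autotuner works against
# (default 4 GiB; only raise it if the host has headroom for every OSD).
ceph config get osd osd_memory_target
ceph config set osd osd_memory_target 8589934592   # example: 8 GiB

# Autotuning of bluestore cache sizes toward osd_memory_target
# (enabled by default on recent releases).
ceph config get osd bluestore_cache_autotune

# RocksDB column family sharding for bluestore metadata
# (applies to OSDs created or migrated on Pacific and later).
ceph config get osd bluestore_rocksdb_cf
```

Note that some of these changes (like the RocksDB column families) only take effect on newly created or migrated OSDs, not on a simple package upgrade.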

At Clyso we're still deploying Quincy (with some of the new tunings from Reef and Squid). Reef hasn't gotten a ton of updates, so I suspect we'll jump straight from Quincy to Squid once it's gotten a point release or two. Hope that helps!

Mark

On 8/12/24 12:18, Özkan Göksu wrote:
Hello folks!

I built a cluster in 2020 and it has been working great with
Nautilus 14.2.16 for the past 4 years.
I have 1000+ RBD volumes for VMs running on Samsung MZ7LH3T8HMLT drives.

Now I want to upgrade the Ceph version with a fresh installation, and
I'd like your opinion on which version would be the best choice for me.
I want to upgrade once and then not touch it again for a minimum of 2 years.

Does anyone have RBD performance comparisons across Nautilus, Octopus,
Pacific, and Quincy?
I just want to learn about the important changes and benefits of this upgrade.

Best regards.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



