Re: Ceph Bluestore

Hello,

Your subject line has little relevance to your rather broad questions.

On Tue, 14 Mar 2017 23:45:26 +0100 Michał Chybowski wrote:

> Hi,
> 
> I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per 
> node) to test if ceph in such small scale is going to perform good 
> enough to put it into production environment (or does it perform well 
> only if there are tens of OSDs, etc.).
So we are to assume that this is a test cluster (but resembling your
deployment plans) and that you have little to no Ceph experience, right?

Ceph is definitely a scale-out design (more OSDs are better) but that of
course depends on your workload, expectations and actual HW/design. 

For a very harsh look at things, a cluster of 3 nodes with one OSD (HDD)
each will only be as "fast" as a single HDD, plus the latencies introduced
by the network, replication and OSD code overhead.
An inline journal makes this worse still (roughly halving write
performance), since every write hits the same disk twice.
And you get to spend three times the money for that "pleasure".
So compared to local storage, Ceph is going to perform in the "mediocre"
to "abysmal" range.
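To put a rough number on that reasoning, here is a back-of-envelope
sketch. The per-disk IOPS figure is an assumption for a typical 7200 rpm
HDD, not a measurement from any actual cluster:

```python
# Back-of-envelope client write IOPS for a tiny 3-node cluster,
# one HDD OSD per node, inline (filestore) journal on the same disk.

hdd_iops = 150          # assumed random write IOPS of one 7200 rpm HDD
osds = 3                # one OSD per node, 3 nodes
replication = 3         # default size=3: every write goes to all 3 OSDs
journal_penalty = 2     # inline journal: each write hits the disk twice

# With size=3 and one OSD per node, every client write lands on every
# disk, so the aggregate collapses to a single disk divided by the
# journal penalty.
client_write_iops = hdd_iops * osds / replication / journal_penalty
print(client_write_iops)
```

Which is why "as fast as a single HDD" above is, if anything, optimistic
for writes.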

> Are there any "do's" and "don'ts" in matter of OSD storage type 
> (bluestore / xfs / ext4 / btrfs), correct 
> "journal-to-storage-drive-size" ratio and monitor placement in very 
> limited space (dedicated machines just for MONs are not an option).
> 
Lots of answers in the docs and this ML, search for them.

If you're testing for something that won't be in production before the end
of this year, look at Bluestore.
Bluestore incidentally has no journal (though it can benefit from fast
storage for similar reasons, e.g. the WAL and RocksDB), and people have
little to no experience yet with what ratios and sizes are "good".
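If you want to experiment with Bluestore sizing anyway, a ceph.conf
fragment along these lines could be a starting point. The sizes below are
pure guesses on my part, precisely because nobody has solid numbers yet:

```ini
[osd]
# Hypothetical sizes, not recommendations: place WAL/DB on fast storage
# if you have it, otherwise they live on the main block device.
bluestore block wal size = 1073741824    ; 1 GB WAL per OSD (assumption)
bluestore block db size = 10737418240    ; 10 GB DB per OSD (assumption)
```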

Also, with Bluestore ante portas, I wouldn't consider BTRFS or ZFS at
this time; too much of a specialty case for the uninitiated.

Which leaves you with XFS or EXT4 for immediate deployment needs, with
EXT4 being deprecated (for RGW users).
I found EXT4 a better fit for our needs (just RBD) in all the years I
tested and compared it with XFS, but if you want to go down the path of
least resistance and have a large pool of people to share your problems
with, XFS is your only choice at this time.
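If you do go with XFS, a minimal ceph.conf sketch along the usual lines;
the mkfs/mount options shown are the commonly used defaults, not
something tuned for your workload:

```ini
[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = rw,noatime,inode64
```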

If your machines are powerful enough, co-sharing MONs is not an issue.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



