Re: Ceph Bluestore

Hello,

On Wed, 15 Mar 2017 09:07:10 +0100 Michał Chybowski wrote:

> > Hello,
> >
> > your subject line has little relevance to your rather broad questions.
> >
> > On Tue, 14 Mar 2017 23:45:26 +0100 Michał Chybowski wrote:
> >  
> >> Hi,
> >>
> >> I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per
> >> node) to test whether Ceph at such a small scale is going to perform well
> >> enough to put it into a production environment (or whether it only
> >> performs well once there are tens of OSDs, etc.).
> > So we are to assume that this is a test cluster (but resembling your
> > deployment plans) and that you have little to no Ceph experience, right?  
> Exactly. If the test cluster performs "well enough", we'll deploy the 
> same setup for production use.

As others mentioned, small clusters can work; none of mine is large in any
sense of the word.
However, the load fits the HW, and we're using SSD journals and SSD-based
cache tiers in some of them.
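
For reference, a cache tier is just the standard tiering setup, roughly along
these lines. This assumes you already have an SSD-only pool "cache" (via its
own CRUSH rule) sitting in front of the HDD-backed "rbd" pool; the pool names,
sizes and devices below are placeholders:

  # put the SSD pool in front of the HDD-backed rbd pool
  ceph osd tier add rbd cache
  ceph osd tier cache-mode cache writeback
  ceph osd tier set-overlay rbd cache
  ceph osd pool set cache hit_set_type bloom
  ceph osd pool set cache target_max_bytes 500000000000

An SSD journal is simply a device or partition handed to ceph-disk at prepare
time, e.g. "ceph-disk prepare /dev/sdb /dev/sdc1" (HDD for data, SSD partition
for the journal).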

> >
> > Ceph is definitely a scale-out design (more OSDs are better) but that of
> > course depends on your workload, expectations and actual HW/design.
> >
> > For a very harsh look at things, a cluster with 3 nodes and one OSD (HDD)
> > each will only be as "fast" as a single HDD, plus the latencies introduced
> > by the network, replication and OSD code overhead.
> > Made even worse (every write hits the disk twice) by an inline journal.
> > And you get to spend 3 times the money for that "pleasure".
> > So compared to local storage, Ceph is going to perform in the "mediocre"
> > to "abysmal" range.  
> Local storage is not even being compared here, as it's a SPOF which I'm 
> trying to eliminate. Mainly, Ceph will be used to provide RBD volumes to 
> XenServer VMs with a replication factor of 3 (safety first, I know it'll 
> cost a lot more than local storage / 2 replicas), but eventually it 
> might also serve as an RGW backend.
>
Then you're stuck with XFS or Bluestore because of RGW.
XenServer (search the archives) isn't exactly well supported with Ceph.
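
The RBD pool itself is the easy part; something like the below, where the pool
name, PG count and image size are placeholders (pick pg_num to match your
final OSD count):

  ceph osd pool create rbd-vms 256 256 replicated
  ceph osd pool set rbd-vms size 3
  ceph osd pool set rbd-vms min_size 2
  rbd create rbd-vms/vm01-disk0 --size 102400   # size in MB, i.e. 100GB

Getting those images attached to XenServer cleanly is the part people struggle
with, hence the pointer to the archives above.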

> >  
> >> Are there any "do's" and "don'ts" in the matter of OSD storage type
> >> (bluestore / xfs / ext4 / btrfs), the correct
> >> "journal-to-storage-drive-size" ratio, and monitor placement in very
> >> limited space (dedicated machines just for MONs are not an option)?
> >>  
> > Lots of answers in the docs and this ML, search for them.
> >
> > If you're testing for something that won't be in production before the end
> > of this year, look at Bluestore.
> > Which incidentally has no journal (but can benefit from fast storage for
> > similar reasons, WAL etc.) and where people have little to no experience
> > with what ratios and sizes are "good".
> >
> > Also, with Bluestore ante portas, I wouldn't consider BTRFS or ZFS at
> > this time; too much of a specialty case for the uninitiated.
> >
> > Which leaves you with XFS or EXT4 for immediate deployment needs, with
> > EXT4 being deprecated (for RGW users).
> > I found EXT4 a better fit for our needs (just RBD) in all the years I
> > tested and compared it with XFS, but if you want to go down the path of
> > least resistance and have a large pool of people to share your problems
> > with, XFS is your only choice at this time.  
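
If you do end up testing Bluestore with its DB/WAL on faster storage, the
knobs are there. A minimal ceph.conf sketch, with sizes and paths picked out
of thin air (normally ceph-disk carves out the block.db/block.wal partitions
for you when you prepare the OSD):

  [osd]
  # sizes in bytes (~15GB DB, 1GB WAL), chosen purely for illustration
  bluestore block db size = 16106127360
  bluestore block wal size = 1073741824

  # or point a specific OSD's DB/WAL at a fast partition
  [osd.0]
  bluestore block db path = /dev/nvme0n1p1
  bluestore block wal path = /dev/nvme0n1p2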
> Why (and how) did EXT4 get "deprecated" for RGW use? Also, could you give 
> me any comparison between EXT4 and XFS (latency, throughput, etc.)?
>
Read the changelogs and release notes, and find these ML threads:

"Deprecating ext4 support"
"Better late than never, some XFS versus EXT4 test results"

In short, ext4's limited xattr size can't handle the long object names RGW
may generate, which is why RGW users are the ones affected; RBD-only use
keeps working fine.

Christian

> >
> > If your machines are powerful enough, co-sharing MONs is not an issue.
> >
> > Christian  
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/



