Re: Ceph instead of RAID

Hi,

I'd just like to echo what Wolfgang said about Ceph being a complex system.

I initially started out testing Ceph with a setup much like yours, and while it performed OK overall, it was not as good as software RAID on the same machine.

Also, as Mark said, if you do larger continuous writes you'll get at very best half the write speed, because every write goes through the OSD journal first and so hits the disk twice.
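
To make that concrete, a back-of-the-envelope sketch (the 100 MB/s figure is just an assumed raw disk speed for illustration, not a measurement):

    # FileStore journals every write before committing it to the data
    # partition, so with the journal on the same disk each byte is
    # written twice and sequential throughput is roughly halved.
    disk_mb_s = 100.0          # assumed raw sequential write speed of one disk
    journal_on_same_disk = True

    effective_mb_s = disk_mb_s / 2 if journal_on_same_disk else disk_mb_s
    print("effective sequential write speed: %.0f MB/s" % effective_mb_s)  # 50 MB/s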

Ceph really shines with multiple servers and lots of concurrent clients.

My test machine ran for more than half a year (going from Argonaut to Cuttlefish), and in that process I came to realize that mixing disk types and sizes was a bad idea (I had some enterprise SATA, some fast desktop and some green disks): speed will be determined by the slowest drive in your setup, which I guess is why they advocate using similar hardware if at all possible.
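
As a toy illustration of that slowest-disk effect (the per-disk speeds below are made up for the example, not measurements from my box):

    # A replicated write is only acknowledged once every replica has
    # committed it, so the ack time is set by the slowest disk involved.
    disk_speed_mb_s = {"enterprise_sata": 130, "fast_desktop": 110, "green": 60}

    object_mb = 4  # size of one object being written
    commit_s = {name: object_mb / speed for name, speed in disk_speed_mb_s.items()}
    print("write acked after %.3f s - the green disk sets the pace"
          % max(commit_s.values()))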

I also ran into all the challenging issues that come with a very young technology: OSDs suddenly refusing to start, PGs going into various incomplete/down/inconsistent states, the monitor's leveldb filling up, and monitors dying at odd times. It is good for a learning experience, but like Wolfgang said, I think it is too much hassle for too little gain when you have something like RAID10/ZFS around.

But by all means, don't let us discourage you if you want to go this route; Ceph's unique self-healing ability was what drew me to running it on a single machine in the first place.

Cheers,
Martin

On Tue, Aug 13, 2013 at 9:32 AM, Wolfgang Hennerbichler <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx> wrote:


On 08/13/2013 09:23 AM, Jeffrey 'jf' Lim wrote:
>>> Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be local, so I could simply create
>>> 6 local OSDs + a monitor, right? Is there anything I need to watch out for in such configuration?
>>
>> You can do that. Although it's nice to play with and everything, I
>> wouldn't recommend doing it. It will give you more pain than pleasure.
>
> How so? Care to elaborate?

Ceph is a complex system, built for clusters. It does some things in
software that are otherwise done in hardware (RAID controllers). The
complexity of a cluster system brings a lot of overhead compared to a
local RAID [whatever] setup, and disk I/O latency will naturally suffer
a bit. An OSD needs about 300 MB of RAM (this may vary with your number
of PGs), so times 6 you "waste" nearly 2 GB of RAM compared to a local
RAID (see the sketch below). Also, Ceph is young and does indeed have
some bugs, while RAID is old and very mature. Although I rely on Ceph on
a production cluster too, it is way harder to maintain than a simple
local RAID. When a disk fails in Ceph you don't have to worry about your
data, which is a good thing, but you do have to worry about the rebuild
(which isn't too hard, but you at least need to know SOMETHING about
Ceph; see the command sketch below), whereas with (hardware) RAID you
simply replace the disk and it gets rebuilt.
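
Spelling out the RAM math (a quick sketch; 300 MB is the rough per-OSD figure from above and will vary with PG count):

    ram_per_osd_mb = 300   # rough per-OSD figure, varies with number of PGs
    num_osds = 6           # one OSD per local disk

    overhead_gb = ram_per_osd_mb * num_osds / 1024.0
    print("extra RAM vs. a local RAID: ~%.1f GB" % overhead_gb)  # ~1.8 GB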
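
And to give a feel for what "knowing SOMETHING about Ceph" means when a disk dies, here is a rough sketch of the usual sequence for removing a failed OSD before adding its replacement (osd.3 is a made-up example ID; double-check against the docs for your release before running anything like this):

    import subprocess

    failed_id = "3"  # hypothetical ID of the failed OSD
    for cmd in (
        ["ceph", "osd", "out", failed_id],                       # stop mapping data to it
        ["ceph", "osd", "crush", "remove", "osd." + failed_id],  # drop it from the CRUSH map
        ["ceph", "auth", "del", "osd." + failed_id],             # delete its auth key
        ["ceph", "osd", "rm", failed_id],                        # remove the OSD entry
    ):
        subprocess.check_call(cmd)
    # after this you'd prepare the new disk and add it back as a fresh OSD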

Others will find more reasons why this is not the best idea for a
production system.

Don't get me wrong, I'm a big supporter of Ceph, but only for clusters,
not for single systems.

wogri

> -jf
>
>
> --
> He who settles on the idea of the intelligent man as a static entity
> only shows himself to be a fool.
>
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master."
>     --Richard Stallman
>


--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbichler@xxxxxxxxxxxxxxxx
http://www.risc-software.at
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
