Re: Gluster -> Ceph


 



Big RAID arrays aren't great as bricks. If an array does fail, the larger brick means much longer heal times.

The main question I ask when evaluating storage solutions is, "what happens when it fails?"

With Ceph, if the placement database is corrupted, all your data is lost (this happened to my employer once, losing 5PB of customer data). With Gluster, it's just files on disks, easily recovered.

If your data is easily replaced, Ceph offers copy-on-write, which is really handy for things like VM images where you might want to clone 100 of them simultaneously.
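
As a rough illustration of what that looks like in practice (just a sketch; the pool and image names are made up, and it assumes an existing base image, a protected snapshot, and the stock rbd CLI):

# Sketch: spin up many copy-on-write clones of one golden VM image
# using the standard rbd CLI. Pool/image names are placeholders.
import subprocess

POOL = "rbd"            # assumed pool name
BASE = "base-image"     # assumed golden VM image
SNAP = f"{POOL}/{BASE}@golden"

# Clones are made from a protected snapshot of the base image.
subprocess.run(["rbd", "snap", "create", SNAP], check=True)
subprocess.run(["rbd", "snap", "protect", SNAP], check=True)

# Each clone is copy-on-write: creating 100 is near-instant and uses
# almost no extra space until the clones start to diverge.
for i in range(100):
    subprocess.run(["rbd", "clone", SNAP, f"{POOL}/vm-{i:03d}"], check=True)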


On December 14, 2023 6:57:00 AM PST, Alvin Starr <alvin@xxxxxxxxxx> wrote:
On 2023-12-14 07:48, Marcus Pedersén wrote:
Hi all,
I am looking in to ceph and cephfs and in my
head I am comparing with gluster.

The way I have been running gluster over the years
is as either replicated or replicated-distributed clusters.
Here are my observations, but I am far from an expert in either Ceph or Gluster.

Gluster works very well with 2 servers containing 2 big RAID disk arrays.

Ceph, on the other hand, has MON, MGR, MDS, etc. that can run on multiple servers (and should, for redundancy), but the OSDs should be spread across lots of small servers with very few disks attached.

It kind of seems that the perfect OSD node would be a single disk with a Raspberry Pi attached and a 2.5GbE NIC.
Something really cheap and replaceable.

So putting Ceph on 2 big servers with RAID arrays is likely a very bad idea.

I am hoping that someone picks up Gluster, because it fits the storage requirements of organizations that measure their storage in TB as opposed to EB.

The small setup we have had has been a replicated cluster
with one arbiter and two fileservers.
These fileservers have been configured with RAID6 and
that RAID array has been used as the brick.
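
(For reference, the volume was created roughly like the sketch below; the hostnames and brick paths are placeholders, not our real ones.)

# Sketch: two data bricks on the RAID6 arrays plus a small arbiter brick.
import subprocess

subprocess.run([
    "gluster", "volume", "create", "gvol0",
    "replica", "3", "arbiter", "1",
    "fs1:/data/raid6/brick",      # fileserver 1: brick on the RAID6 array
    "fs2:/data/raid6/brick",      # fileserver 2: brick on the RAID6 array
    "arb1:/data/arbiter/brick",   # arbiter: holds metadata only
], check=True)
subprocess.run(["gluster", "volume", "start", "gvol0"], check=True)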

If disaster strikes and one fileserver burns up,
there is still the other fileserver, and as it is RAIDed
I can lose two disks on this machine before I
start to lose data.

.... thinking ceph and similar setup ....
The idea is to have one "admin" node and two fileservers.
The admin node will run mon, mgr and mds.
The storage nodes will run mon, mgr, mds and 8x osd (8 disks),
with replication = 2.
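
(Roughly, I imagine the pools would be created something like the sketch below; the pool names and PG counts are placeholders, and I may well have the details wrong.)

# Sketch: replicated cephfs data/metadata pools with size = 2.
import subprocess

def ceph(*args):
    # assumes the ceph CLI is on PATH with an admin keyring
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "pool", "create", "cephfs_data", "128")
ceph("osd", "pool", "create", "cephfs_metadata", "32")
ceph("osd", "pool", "set", "cephfs_data", "size", "2")
ceph("osd", "pool", "set", "cephfs_metadata", "size", "2")

# tie the pools together into a filesystem served by the MDS daemons
ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")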

The problem is that I cannot get my head around how
to think when disaster strikes.
Say one fileserver burns up: there is still the other
fileserver, and from my understanding the ceph system
will start to re-replicate the files within the remaining fileserver,
and when this is done disks can be lost on this server
without losing data.
But to have this protection at the hardware level it
means that the ceph cluster can never be more than 50% full
or this will not work, right?
... and it becomes similar if we have three fileservers,
then the cluster can never be more than 2/3 full?
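
Sketching the arithmetic I have in my head (a rough model only; it assumes replication size = 2 and that recovery is allowed to put both copies on the surviving server's OSDs, i.e. an osd-level failure domain):

# Sketch: how full can the cluster be and still fully re-replicate
# after losing one whole fileserver?
def max_safe_fill(num_hosts: int, replicas: int = 2) -> float:
    raw = float(num_hosts)        # each host contributes 1 unit of raw space
    usable = raw / replicas       # usable capacity with N-way replication
    surviving_raw = raw - 1.0     # raw space left after one host burns up
    # every object still needs `replicas` copies, all of which must
    # now fit on the surviving hosts
    max_data = surviving_raw / replicas
    return max_data / usable

print(max_safe_fill(2))   # 0.5   -> two fileservers: at most 50% full
print(max_safe_fill(3))   # ~0.67 -> three fileservers: at most 2/3 full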

I am not sure if I misunderstand how ceph works or
if ceph just works badly on smaller systems like this?

I would appreciate it if somebody with better knowledge
could help me out with this!

Many thanks in advance!!

Marcus

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
