Re: ceph and raid 1 replication

> Hi everyone,
> I'm new to ceph and I'm still studying it.
> In my company we decided to test ceph for possible further implementations.
>
> Although I understood its capabilities, I'm still doubtful about how to
> set up replication.

Default settings in ceph will give you replication = 3, which is like
RAID-1 but with three drives holding the same data. It is just not done
on a per-disk basis; instead, all stored data will have two extra
copies on other drives (on separate hosts).
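
For example (the pool name "mypool" below is just a placeholder), you
can check and change the replica count per pool, and the cluster-wide
default for new pools, with:

  ceph osd pool get mypool size        # current number of copies
  ceph osd pool get mypool min_size    # copies required to keep serving I/O
  ceph osd pool set mypool size 3      # three copies, which is the default
  ceph config get mon osd_pool_default_size   # default for newly created pools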

> Once implemented in production I can accept a little loss of
> performance in favor of stability and a good night's sleep. Hence, if
> in the testing area I can introduce ceph as network storage, I'd like
> to replicate some OSDs' drives as I'd do with RAID 1 once in
> production.

As with zfs (and btrfs and other storage solutions), you are most often
best served by handing over all drives raw, as they are, and letting
the storage system handle redundancy at a higher level, rather than
building raid-1 sets and handing those over to the storage.
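
As a rough sketch of what that looks like in practice (the device path
and host name below are made up), each raw disk becomes its own OSD:

  # classic tooling, one OSD per raw device
  ceph-volume lvm create --data /dev/sdb

  # or, on a cephadm-managed cluster
  ceph orch daemon add osd node1:/dev/sdb
  ceph orch apply osd --all-available-devices   # consume every empty disk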

> The goal would be hosting data for kubernetes storage classes.
> so the questions are:
>
> 1) What do you think about this kind of solution?

Bad idea

> 2) How can I set up full replication between OSDs?

Not really needed. Go with the defaults, allow ceph to place three
copies of each piece of data spread out over three or more separate
OSD hosts, and move on to the interesting part of actually using the
storage instead of trying to "fix" something that isn't broken by
default.
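
If you want to convince yourself the defaults already do what you are
after, a few read-only commands show it (assuming a reasonably recent
release; the default replicated rule spreads copies across hosts):

  ceph osd pool ls detail                    # size 3, min_size 2 per pool
  ceph osd crush rule dump replicated_rule   # failure domain "type": "host"
  ceph osd tree                              # how hosts and OSDs are laid out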

Ceph will not make full copies of whole OSDs; rather, pools are made up
of many PGs, and each PG is replicated as needed to give you three
copies, just never on the same OSD. It will also auto-repair onto other
drives and hosts and auto-balance data, which a raid-1 set would not do
unless you have unused hot spares waiting for disasters.
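
To watch that in action ("mypool" is again a placeholder), you can look
at where each PG's copies live and what the cluster does after a drive
or host failure:

  ceph pg ls-by-pool mypool    # which OSDs hold the copies of each PG
  ceph -s                      # recovery/rebalance progress after a failure
  ceph balancer status         # the automatic data balancer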


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


