Multiple storage sites for disaster recovery and/or active-active failover

Hello,

I'm new to Ceph, and currently evaluating a deployment strategy.

I'm planning to create a sort of home-hosting service (web and compute
hosting, databases, etc.), distributed across several locations
(cities), extending the "commodity hardware" concept to "commodity
data-center" and "commodity connectivity". Anything is expected to
fail (disks, servers, switches, routers, Internet fibre links, power,
etc.), but the overall service should keep working.
Reaching the services from the outside relies on DNS tuning
(adding/removing locations as they appear/fail, with a low TTL) and
possibly proxies (providing faster response times, at the cost of
routing traffic through a SPOF...).
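
For reference, the zone would look something like this (a minimal
sketch, with a hypothetical zone and documentation addresses): one A
record per healthy location, and a TTL low enough that removing a
record takes effect within a minute or so.

  $TTL 60
  www   60  IN  A  192.0.2.10      ; location A
  www   60  IN  A  198.51.100.10   ; location B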

Hardware will be recent Xeon D-1500 Mini-ITX motherboards with 2 or 4
Ethernet ports, 6 SATA ports, 1 NVMe port, and no PCIe expansion
cards (typically Supermicro X10SDV or AsrockRack D1541D4I). A dozen
servers are built into custom enclosures at each location, with
redundant 12V power supplies, switches, monitoring, etc.
http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-F.cfm
http://www.asrockrack.com/general/productdetail.asp?Model=D1541D4I
http://www.pulspower.com/products/show/product/detail/cps20121/

I am wondering how Ceph could provide the location-to-location
failover, possibly with an active-active option. I'm planning to use
CephFS (shared storage) and RBD (VMs).
I'm not sure yet how to deal with replication for Postgres and other
specific services.
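
For Postgres I would probably rely on its built-in streaming
replication rather than on Ceph; a minimal sketch, with parameter
names from the 9.x documentation and hypothetical hostnames:

  # on the primary (postgresql.conf)
  wal_level = hot_standby
  max_wal_senders = 3

  # on the standby (postgresql.conf), to allow read-only queries
  hot_standby = on

  # on the standby (recovery.conf)
  standby_mode = 'on'
  primary_conninfo = 'host=db-site-a user=replicator'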

Say I have a CephFS at location A, currently in read-write use,
serving HTTP requests via the websites/apps/etc. It should replicate
its data to location B, which could be in standby or (preferably)
read-only mode, potentially serving HTTP requests (provided those
requests perform no filesystem writes). The link between locations A
and B (and potentially C, D, etc.) is the same Internet fibre link
from the local ISP: not fault-tolerant, subject to latency, etc.
When the fibre link or power supply fails, the other locations should
notice, change the DNS settings (to stop sending requests to
location A), switch their CephFS to active or read-write mode, and
continue serving requests.
I can accept a few minutes of HTTP downtime, but the data should
always remain accessible from somewhere, possibly with a few minutes
of loss, but without a crash.
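
The "change the DNS settings" step could be a simple dynamic update
issued by whichever surviving location detects the failure; a sketch
with nsupdate (zone, key file and addresses are made up):

  # drop location A's record so clients stop resolving to it
  nsupdate -k /etc/bind/ddns.key <<EOF
  server ns1.example.org
  zone example.org
  update delete www.example.org. A 192.0.2.10
  send
  EOF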

As I read the docs, CephFS and RBD do not handle that situation;
RadosGW has a sort of data replication between clusters and/or pools.
I'm not sure whether that problem is solved by CRUSH rulesets, which
would have to be fine-tuned (say location A is a sort of "room" in
the CRUSH hierarchy; if I have 2 enclosures with 10 servers in the
same location, those enclosures are "racks", etc.; a rough sketch is
below).
Will CRUSH handle latency, failed links, failed power, etc.?
How does it address the CephFS need (active-standby or active-active)?
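
To make the question more concrete, I imagine the ruleset looking
roughly like this (a sketch in decompiled crushmap syntax, assuming
buckets of type "room" for the locations and "rack" for the
enclosures have been created):

  # place each replica in a different room (i.e. a different location)
  rule replicated_across_rooms {
          ruleset 1
          type replicated
          min_size 2
          max_size 4
          step take default
          step chooseleaf firstn 0 type room
          step emit
  }

As far as I understand, this only controls placement, which is why
I'm asking how latency and failed links are handled.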

Note: I'm also evaluating DRBD, which I know quite well and which has
evolved since my last setup. It does not solve the same low-level
problems, but it could also be used in my case, obviously not at the
same scale.
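
For comparison, what I have in mind with DRBD is roughly the
following (a minimal resource sketch; hostnames, devices and
addresses are made up; protocol A for asynchronous replication over
the WAN):

  resource web_data {
      net {
          protocol A;        # asynchronous, more tolerant of WAN latency
      }
      on node-a {
          device    /dev/drbd0;
          disk      /dev/sda3;
          address   192.0.2.10:7789;
          meta-disk internal;
      }
      on node-b {
          device    /dev/drbd0;
          disk      /dev/sda3;
          address   198.51.100.10:7789;
          meta-disk internal;
      }
  }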

Thanks in advance for your reading patience and your answers!

-- 
Nicolas Huillard
Founding partner - Technical Director - Dolomède

nhuillard@xxxxxxxxxxx
Landline: +33 9 52 31 06 10
Mobile: +33 6 50 27 69 08
http://www.dolomede.fr/

http://www.350.org/
