Re: HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

Since your app is an Apache/PHP app, could you reconfigure it to use an S3 module rather than POSIX file open() calls?  Then you could drop CephFS and run the Civetweb-based S3 gateway (RGW) instead.  You can have "active-active" endpoints with round-robin DNS, an F5, or similar.  You would also have to repopulate the objects into the RADOS pools.
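For illustration, a rough sketch of what the read path looks like over S3 (Python/boto3 here; a PHP app would use the equivalent S3 client).  The endpoint, port, credentials, bucket, and key below are all placeholders:

    # Untested sketch: read a file through the RGW S3 API instead of a
    # POSIX open() on a CephFS mount. All names and credentials below
    # are placeholders for illustration.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',  # Civetweb's default port
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    # The equivalent of reading the file off the CephFS mount:
    resp = s3.get_object(Bucket='app-assets', Key='images/logo.png')
    data = resp['Body'].read()

Since each RGW instance is stateless against the same RADOS pools, you can put as many gateways as you like behind the round-robin DNS or F5 VIP.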

Also, increase that pool "size" parameter (the replica count) to 3.  ;-)
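That's a one-liner with the CLI (ceph osd pool set <pool> size 3); if you'd rather script it, here's a rough librados sketch (the pool name and conffile path are assumptions):

    # Sketch only: raise a pool's replica count to 3 via librados'
    # mon_command, equivalent to `ceph osd pool set <pool> size 3`.
    # Pool name and conffile path are assumptions for illustration.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd pool set",
                          "pool": "cephfs_data",  # hypothetical pool name
                          "var": "size",
                          "val": "3"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)
    finally:
        cluster.shutdown()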

Lots of work for active-active, but the whole stack will be much more resilient.  (This is coming from someone with a ClearCase / NFS / stale-file-handles-up-the-wazoo background.)



On Mon, Jun 12, 2017 at 10:41 AM, Daniel Carrasco <d.carrasco@xxxxxxxxx> wrote:
2017-06-12 16:10 GMT+02:00 David Turner <drakonstein@xxxxxxxxx>:
I have an incredibly light-weight cephfs configuration.  I set up an MDS on each mon (3 total), and have 9TB of data in cephfs.  This data only has 1 client that reads a few files at a time.  I haven't noticed any downtime when it fails over to a standby MDS.  So it definitely depends on your workload as to how a failover will affect your environment.

On Mon, Jun 12, 2017 at 9:59 AM John Petrini <jpetrini@xxxxxxxxxxxx> wrote:
We use the following in our ceph.conf for MDS failover. We're running one active and one standby. Last time it failed over there was about 2 minutes of downtime before the mounts started responding again, but it did recover gracefully.

[mds]
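# allow at most one active MDS rank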
max_mds = 1
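# the standby takes over rank 0 specifically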
mds_standby_for_rank = 0
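# standby continuously replays the active journal (warm cache, faster takeover)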
mds_standby_replay = true

___

John Petrini



Thanks to both.
I'm working on that right now because I need a very fast failover. So far the tests show a very fast response when an OSD fails (about 5 seconds), but a very slow response when the main MDS fails (I haven't measured the exact time, but it was down for quite a while). Maybe that was because I created the second MDS after mounting; I ran some tests just before sending this email and now it looks very fast (I didn't notice the downtime).

Greetings!!


--
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tlf:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
