Hello,
I'm very new to Ceph, so this may be a noob question.
We have an architecture with several web servers (nginx, php...) that share a common file server through NFS. Of course that is a SPOF, so we want to move to a replicated file system to avoid future problems.
We've already tested GlusterFS, but it is very slow reading small files with the official client (from 600 ms to 1700 ms to load the doc page), and through NFS-Ganesha it fails a lot (permission errors, 404s when the file exists...).
The next one we're trying is Ceph, which looks very good and performs well even with small files (close to NFS performance: 90-100 ms vs 100-120 ms), but in some tests I've done it stops working when an OSD is down.
My test architecture is two servers with one OSD and one MON each, and a third with a MON and an MDS. I've configured the cluster to keep two copies of every PG (just like RAID 1) and everything looks fine (health OK, three monitors...).
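For reference, this is more or less how I configured the replication (a rough sketch from memory; the pool names "cephfs_data" / "cephfs_metadata" and the exact values are just what I think I used, so they may not be exact):

    # ceph.conf, [global] section on every node
    [global]
    osd pool default size = 2       # keep two copies of every object
    osd pool default min size = 1   # not sure if I actually set this one

    # or applied directly to the pools after creating them:
    ceph osd pool set cephfs_data size 2
    ceph osd pool set cephfs_metadata size 2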
My test client also works fine: it connects to the cluster and is able to serve the web page without problems. My problem comes when an OSD goes down. The cluster detects that it is down, shows that it needs more OSDs to keep the two copies, designates a new MON, and looks like it keeps working, but the client is unable to receive new files until I power the OSD back on (it happens with both OSDs).
My question is: is there any way to tell Ceph to keep serving files even when an OSD is down?
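In case it's relevant, my guess (and it is only a guess) is that the pool's min_size could be what blocks I/O when only one copy is left, so this is what I was planning to check next (pool name is just an example):

    # show the replication settings of the data pool
    ceph osd pool get cephfs_data size
    ceph osd pool get cephfs_data min_size

    # if min_size is 2, I understand the PGs refuse I/O with a single OSD up,
    # so maybe I have to lower it (again, not sure if this is the right fix):
    ceph osd pool set cephfs_data min_size 1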
My other question is about MDS:
Is a multi-MDS environment stable? Because if I have a replicated FS to avoid a SPOF but I can only deploy one MDS, then we have a new SPOF...
This is to find out whether I maybe need to use block device pools instead of file server (CephFS) pools.
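Just to be clear about what I mean by multi-MDS: what I had in mind was simply running a second MDS daemon on another node so one can take over if the other dies, something like this (hostnames are only examples, and I don't know if a standby MDS counts as the "multi-MDS" setup I'm asking about):

    # ceph.conf
    [mds.node2]
    host = node2

    [mds.node3]
    host = node3

    # check which MDS is active and which is standby
    ceph mds stat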
Thanks!!! and greetings!!
_________________________________________
Daniel Carrasco Marín
Ingeniería para la Innovación i2TIC, S.L.
Tlf: +34 911 12 32 84 Ext: 223
www.i2tic.com
_________________________________________