Dear all,
we are studying the possibility of migrating our FS to CephFS next year. I know that it is not ready for production environments yet, but we are planning to play with it over the next few months by deploying a basic testbed. Reading the documentation, I see 3 mons, 1 MDS and several OSDs (all on physical machines, as I have understood). Is this true?
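For the testbed we are thinking of something along these lines (the hostnames are only placeholders, and it assumes ceph-deploy is run from an admin node):

  # define the initial mon quorum on three physical machines
  ceph-deploy new mon1 mon2 mon3
  ceph-deploy install mon1 mon2 mon3 mds1 osd1 osd2 osd3
  ceph-deploy mon create mon1 mon2 mon3
  ceph-deploy gatherkeys mon1

  # one OSD per data disk on the OSD hosts
  ceph-deploy osd create osd1:sdb osd2:sdb osd3:sdb

  # a single metadata server for CephFS
  ceph-deploy mds create mds1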
On the other hand, I do not understand the fail-over mechanism for clients that have the FS mounted. Looking at the documentation:
ceph-fuse [ -m monaddr:port ] mountpoint [ fuse options ]
You have to specify (hardcode) the "monaddr:port". If this mon (IP) goes down, what happens? Do you lose the FS on that node, or is there a generic round-robin DNS setup for the mons?
You can actually specify a list of mons; once connected to any mon, the client fetches the full list and will reconnect to another if the one it is currently talking to goes down.
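For example (the addresses and mount point below are only placeholders), you can pass a comma-separated list of mons on the command line, or let the client pick them up from /etc/ceph/ceph.conf:

  # ceph-fuse with an explicit list of mons; the mount survives any single mon failure
  ceph-fuse -m 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789 /mnt/cephfs

  # the kernel client accepts the same kind of list
  mount -t ceph 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret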
Is there any implementation of "tiering" or "HSM" at the software level? I mean, can I mix different types of disks (SSDs and SATA) in different pools and migrate data between them automatically (most used, size, last access time)?
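What I have in mind is roughly the following (pool names, ruleset numbers and PG counts are only illustrative, and it assumes the CRUSH map already has separate rules selecting the SSD and the SATA OSDs):

  # one pool per disk type, each mapped to its own CRUSH ruleset
  ceph osd pool create ssd-pool 128
  ceph osd pool set ssd-pool crush_ruleset 1

  ceph osd pool create sata-pool 128
  ceph osd pool set sata-pool crush_ruleset 2

The part I am unsure about is whether data can then be moved between those pools automatically.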
-Greg
Could anyone please clarify this point for me?
Regards, I
--
####################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
####################################
Bertrand Russell:
"El problema con el mundo es que los estúpidos están seguros de todo y los inteligentes están llenos de dudas"
--
Software Engineer #42 @ http://inktank.com | http://ceph.com