Dear all,
We are studying the possibility of migrating our FS to CephFS next year. I know it is not yet ready for production environments, but we plan to play with it over the next few months by deploying a basic testbed. However, I do not understand the fail-over mechanism for clients that have the FS mounted. Looking at the documentation:
ceph-fuse [ -m monaddr:port ] mountpoint [ fuse options ]
You have to specify (hardcode) the "monaddr:port". If that mon (IP) is down, what happens? Do you lose the FS on that node, or is there a generic DNS round-robin setup for the mons?
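For example, would something like this work? (A rough sketch; the hostnames are made up, and I am assuming both the "mon host" option in ceph.conf and -m accept a list of monitors.)

# /etc/ceph/ceph.conf on the client
[global]
    mon host = mon01.example.org:6789, mon02.example.org:6789, mon03.example.org:6789

# mount without hardcoding a single monitor on the command line,
# letting the client pick any reachable mon from ceph.conf
ceph-fuse /mnt/cephfs

# or pass the full list explicitly
ceph-fuse -m mon01.example.org:6789,mon02.example.org:6789,mon03.example.org:6789 /mnt/cephfs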
Is there any implementation of "tiering" or "HSM" at the software level? I mean, can I mix different types of disks (SSDs and SATA) in different pools and migrate data between them automatically (by most used, size, last access time)?
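To illustrate what I mean, would something along these lines be the supported way to do it? (A rough sketch based on the cache tiering commands I have seen in the docs; pool names and the CRUSH ruleset id are placeholders, and I am assuming a separate CRUSH rule that only selects SSD OSDs already exists.)

# a SATA-backed base pool and an SSD-backed pool
ceph osd pool create sata-pool 1024 1024
ceph osd pool create ssd-pool 512 512

# map the SSD pool to a CRUSH rule (id 1 here) that only uses SSD OSDs
ceph osd pool set ssd-pool crush_ruleset 1

# put the SSD pool in front of the SATA pool as a writeback cache tier
ceph osd tier add sata-pool ssd-pool
ceph osd tier cache-mode ssd-pool writeback
ceph osd tier set-overlay sata-pool ssd-pool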
Could anyone please clarify these points for me?
Regards, I
--
####################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
####################################
Bertrand Russell:
"El problema con el mundo es que los estúpidos están seguros de todo y los inteligentes están llenos de dudas"
####################################