cephfs: Minimal deployment

Dear list,
  we are studying the possibility of migrating our filesystem to CephFS next year. I know it is not yet ready for production environments, but we plan to experiment with it over the coming months by deploying a basic testbed.
  Reading the documentation, I see 3 MONs, 1 MDS, and several OSDs (all on physical machines, as I understand it). Is this correct?
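Concretely, the kind of minimal testbed I have in mind would be something like this (node1..node6 are placeholder hostnames, and I am assuming ceph-deploy is available on an admin node):

```shell
# Placeholder hostnames node1..node6; a sketch only, not a tested recipe.
ceph-deploy new node1 node2 node3       # 3 monitors, enough for quorum
ceph-deploy mon create-initial
ceph-deploy osd create node4:/dev/sdb node5:/dev/sdb node6:/dev/sdb
ceph-deploy mds create node1            # a single MDS to start with
```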
  On the other hand, I do not understand the failover mechanism for clients that have mounted the filesystem. Looking at the documentation:

   ceph-fuse [ -m monaddr:port ] mountpoint [ fuse options ]
You have to specify (hardcode) the "monaddr:port". If that monitor (IP) is down, what happens? Do you lose the filesystem on that node, or is there a generic DNS round-robin mechanism for the monitors?
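For example, I was expecting something like the following to work, with the client failing over between monitors (mon1..mon3 are placeholder hostnames), but I am not sure it is supported:

```shell
# Placeholder monitor hostnames; can the client be given the whole
# monitor list so the mount survives one monitor going down?
ceph-fuse -m mon1:6789,mon2:6789,mon3:6789 /mnt/cephfs

# The same question applies to the kernel client:
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
```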

  Is there any software-level implementation of "tiering" or "HSM"? That is, can I mix different types of disks (SSDs and SATA) in different pools and migrate data between them automatically (by access frequency, size, or last access time)?
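Something along these lines is what I am hoping exists (the pool names are made up, and I am only guessing at the commands from the cache-tiering documentation):

```shell
# Hypothetical pools: "sata-pool" on SATA disks, "ssd-pool" on SSDs.
# Can an SSD pool be put in front of a SATA pool as a writeback cache,
# with hot data promoted and cold data flushed automatically?
ceph osd tier add sata-pool ssd-pool
ceph osd tier cache-mode ssd-pool writeback
ceph osd tier set-overlay sata-pool ssd-pool
```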

Could anyone please clarify these points?

Regards, I

--
####################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
####################################
Bertrand Russell:
"The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt"
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
