Problem with capacity when mounting CephFS?

Hi everyone.

I have 83 OSDs, each with the same 2 TB capacity (166 TB raw in total).
I'm using replication 3 for the pools ('data', 'metadata').

But when I mount the Ceph filesystem from a client (using: mount -t ceph Monitor_IP:/ /ceph -o name=admin,secret=xxxxxxxxxx),
the capacity shown is 160 TB. Since I use replication 3, shouldn't it show roughly 160 TB / 3 ≈ 53 TB of usable space?
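
(My back-of-the-envelope check, assuming replicated pools with size=3 and ignoring any metadata or other overhead, is just the reported raw size divided by the replica count:

    # expected usable space = raw capacity / replication factor
    echo "scale=1; 160 / 3" | bc    # prints ~53.3 (TB)

)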

Filesystem       Size  Used  Avail  Use%  Mounted on
192.168.32.90:/  160T  500G  156T     1%  /tmp/ceph_mount
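
For reference, I understand `ceph df` (run on a node with an admin keyring) should show raw cluster capacity separately from per-pool usable space, which may be the number I was expecting; I'm not pasting output here:

    # compare raw cluster capacity with per-pool availability
    ceph df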

Could someone please explain this to me?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
