Ceph (Luminous) shows total_space wrong

Hi,

I have been running Ceph Jewel successfully and now want to try Luminous.

I also enabled the experimental BlueStore backend when creating the OSDs. The problem: I have 20x 3TB HDDs across two nodes, so I would expect about 55TB usable (as I get on Jewel), but on Luminous I see only 200GB. Ceph thinks I have only 200GB of space available in total, even though all OSDs are up and in.
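The arithmetic behind my ~55TB expectation, as a quick sanity check (the 2.7 TiB per-disk figure is what df -h reports for a 3 TB drive):

```python
# Sanity check: expected raw capacity vs. what Ceph reports.
# Assumption: df -h shows each 3 TB (decimal) drive as ~2.7 TiB (binary),
# matching what I see on the nodes.
disks = 20
tib_per_disk = 2.7                        # per-disk size reported by df -h
raw_tib = round(disks * tib_per_disk, 1)  # raw capacity across both nodes
print(raw_tib)  # -> 54.0, i.e. roughly the ~55 TB I expected, not 200 GB
```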

20 OSDs up; 20 OSDs in; 0 down.

ceph -s shows HEALTH_OK. I have a single monitor and a single MDS (1/1/1), and the MDS is up:active.

ceph osd tree shows all OSDs on both nodes as up, each with weight 1.00000. I also checked with df -h, and every disk shows 2.7TB. Something is clearly wrong: the same settings and the same deployment procedure work on Jewel but fail on Luminous.
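For completeness, these are the checks I ran to compare what each layer reports (a sketch; they need a live cluster, and device names will differ on other setups):

```
# How Ceph sees the capacity (this is where only ~200GB shows up):
ceph df
ceph osd df        # per-OSD size, utilisation and weight
ceph osd tree      # all 20 OSDs up, weight 1.00000

# How the OS sees the same disks (each one shows 2.7T):
df -h
lsblk
```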

What might it be?

What else do you need to know to help diagnose this? Why does Ceph think I have only 200GB of space?

Thanks,

Gencer.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
