When one OSD is full, does the whole cluster switch to r/o?

Hello,

I've got a mini test cluster with two hosts and two OSDs in each:

# id    weight    type name    up/down    reweight
-1   2    root default
-3   2        rack unknownrack
-2   1              host ceph1
0    1                  osd.0    up    1
1    1                  osd.1    up    1
-4   1              host ceph2
3    1                  osd.3    up    1
2    1                  osd.2    up    1


osd.0 and osd.2 are 500 MB in size, osd.1 and osd.3 are 1 TB. I know I need to set the weights correctly, but this is only a test.
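
For example, I assume something along these lines would set the CRUSH weights roughly proportional to capacity (the values are just my guess, I have not tried them yet):

    ceph osd crush reweight osd.0 0.0005    # ~500 MB expressed in TB
    ceph osd crush reweight osd.1 1.0       # 1 TB
    ceph osd crush reweight osd.2 0.0005
    ceph osd crush reweight osd.3 1.0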

I wrote a lot of data to the cluster (via CephFS, to the default data pool with 2 replicas). When osd.0 and osd.2 reached 100% full (on the physical device), I could not write any more data to the cluster, even though osd.1 and osd.3 still had enough free space to hold 2 replicas.
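
For reference, this is roughly how I checked it (the data path is from my setup, so treat it as illustrative):

    ceph -s                              # cluster status; reports the full OSD(s)
    ceph health detail                   # lists which OSDs are full / near full
    df -h /var/lib/ceph/osd/ceph-0       # the small OSD's partition shows 100% used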

My question: is this the correct behaviour? If I have 1000 OSDs and only one of them becomes full, does the whole cluster go read-only? This is my first try with Ceph, so please bear with me if my question is too basic.

I'm using Ubuntu Precise x86_64 (12.04) and ceph-0.56.3.

Thank you,
Regards
Michael
