Re: When one OSD is full, does the whole cluster switch to r/o?

Hi,

osd.0 and osd.2 are 500 MB, osd.1 and osd.3 are 1 TB. I know I need
to set the weights correctly, but this was only a test.

You need to set up the weights correctly, even though it is only a test.
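CRUSH weights are conventionally the OSD capacity in TB, so a rough sketch for the sizes you describe (using the OSD ids from your mail) would be something like:

    ceph osd crush reweight osd.0 0.0005
    ceph osd crush reweight osd.2 0.0005
    ceph osd crush reweight osd.1 1.0
    ceph osd crush reweight osd.3 1.0

500 MB is about 0.0005 TB, which is also why such tiny OSDs are awkward to weight sensibly in practice.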

My question: is this how it is supposed to work?

Yes.

If I have 1000 OSDs and only one of them
reaches full, does the whole cluster go read-only?

No, that is not the case.

As I understand it, files are mapped to objects in PGs (placement groups), which in turn are mapped to individual OSDs. If you want to write to a PG that is mapped to a full OSD, that write cannot succeed.
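You can inspect this mapping yourself. For a given object name (here "myobject" in a pool called "rbd", both just assumed examples, not anything from your cluster), the following prints the PG id and the set of OSDs that PG maps to:

    ceph osd map rbd myobject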

However, the whole cluster does not go "read-only"; only the PGs that are actually mapped to the full OSDs do.
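To see which OSDs are approaching that point, you can check per-OSD utilization and health warnings (the nearfull/full thresholds of 0.85/0.95 are the usual defaults, not values read from your cluster):

    ceph osd df
    ceph health detail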

--
Jens Kristian Søgaard, Mermaid Consulting ApS,
jens@xxxxxxxxxxxxxxxxxxxx,
http://www.mermaidconsulting.com/


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


