Re: When one OSD is full the whole cluster switch to r/o ?

On 02/20/2013 11:18 AM, Jens Kristian Søgaard wrote:
> Hi,
>
>> osd.0 and osd.2 are 500 MB; osd.1 and osd.3 are 1 TB. I know I need
>> to set the weights up correctly, but it was only a test.
>
> You need to set up the weights correctly, even though it is only a test.
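For example, CRUSH weights are conventionally set to roughly each OSD's capacity in TB. A minimal sketch for the four OSDs above (the exact values are illustrative):

  # set CRUSH weights proportional to capacity (~1.0 per TB)
  ceph osd crush reweight osd.0 0.0005   # 500 MB
  ceph osd crush reweight osd.1 1.0      # 1 TB
  ceph osd crush reweight osd.2 0.0005   # 500 MB
  ceph osd crush reweight osd.3 1.0      # 1 TB
  ceph osd tree                          # verify the resulting weights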

>> My question: is this the correct behavior?
>
> Yes.
>
>> If I have 1000 OSDs and only one
>> reaches full, does the whole cluster go read-only?
>
> No, that is not the case.
>
> Per my understanding, files are mapped to objects in pgs, which are in
> turn mapped to individual osds. If you want to write to a pg that is
> mapped to an osd that is full, then that write cannot succeed.
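You can inspect that mapping directly. A quick sketch, using a hypothetical pool "data" and object "myobject":

  # show which pg and osds a given object maps to
  ceph osd map data myobject
  # prints something like:
  #   osdmap e42 pool 'data' (0) object 'myobject' -> pg 0.5f2 -> up [1,3] acting [1,3]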


I'm not sure about that.

When the "full" flag is set in the osdmap, my understanding is that the whole cluster will refuse writes.

Ref: http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#the-cluster-has-no-free-disk-space
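A quick way to check is to look at the cluster health and the osdmap flags; a sketch (exact output varies by version):

  # check whether the cluster-wide "full" flag is set
  ceph health detail            # warns about full / near-full osds
  ceph osd dump | grep -i full  # shows the full ratio and the "full" flag when set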

Wido

> However the whole cluster does not go "read-only", only the pgs that
> are actually mapped to the osds that are full.



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


