Re: All pools full after one OSD got OSD_FULL state

On Thu, Mar 29, 2018 at 1:17 AM, Jakub Jaszewski <jaszewski.jakub@xxxxxxxxx> wrote:
Many thanks Mike, that explains the stopped IOs. I've just finished adding new disks to the cluster and am now trying to evenly reweight the OSDs by PG.
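For reference, a rough sketch of the built-in reweight-by-pg helper, in case that is what you are using (the 110 overload threshold is just an illustrative value, not taken from this thread):

    # dry run: show the weight changes that would be applied
    ceph osd test-reweight-by-pg 110
    # reweight OSDs holding more than 110% of the average PG count
    ceph osd reweight-by-pg 110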

May I ask you two more questions?
1. As I was in a hurry, I did not check whether only write ops were blocked or whether reads from the pools were blocked as well. Do you happen to know?

I don't know for certain, but I think reads are still processed normally.
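If you want to confirm what is actually blocked when it happens, something along these lines should show it (a sketch, assuming a Luminous-era cluster like the one discussed here):

    # lists OSD_FULL / OSD_NEARFULL and any slow/blocked request warnings
    ceph health detail
    # per-OSD utilization, to spot which OSD hit the full ratio
    ceph osd df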
 
2. In our cluster all OSDs are shared by all pools. In the case where each pool has its own dedicated OSDs, does one FULL OSD block only that pool or the whole cluster?

I've not tested this, but my understanding is that pools using separate sets of OSDs would still be able to do writes, assuming they haven't filled up as well. If you have multiple pools that use the full OSD, then at least those pools would be blocked. This should be visible from 'ceph df', since it shows different amounts of MAX AVAIL for pools backed by different sets of OSDs.
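For illustration, a rough sketch of the POOLS section of 'ceph df' in that situation (pool names and numbers are made up; the point is only that MAX AVAIL differs between pools backed by different OSDs):

    POOLS:
        NAME        ID     USED      %USED     MAX AVAIL     OBJECTS
        pool-hdd    1      1862G     98.21     34G           480000
        pool-ssd    2      210G      15.19     1172G         52000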

Thanks!

np

mike 
