Re: ceph OSD with 95% full

>> Using a Ceph cluster with 100+ OSDs; the cluster is about 60% full.
>> One of the OSDs is 95% full.
>> If an OSD is 95% full, does it impact any storage operation? Does this
>> impact VMs/instances?

>Yes, a single full OSD will impact the whole cluster: it will block write operations to the cluster.

Thanks for the clarification. Really? Is an OSD hitting 95% full
designed to block write I/O for the whole Ceph cluster?
I ask because I have around 251 OSDs, of which only one is 95% full;
the other 250 are not even near-full...
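
For reference, this is roughly how I am checking the state, and the
temporary workaround as I understand it (ceph pg set_full_ratio is the
pre-Luminous syntax, and 0.97 is only an illustrative value, not a
recommendation):

    # Per-OSD utilization -- spots the single overfull OSD
    ceph osd df

    # Health detail lists near-full and full OSDs explicitly
    ceph health detail

    # Emergency headroom only: temporarily raise the full ratio so
    # writes unblock while you rebalance (pre-Luminous syntax)
    ceph pg set_full_ratio 0.97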

Thanks
Swami
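
PS: for the archives, the re-weight mentioned below was along these
lines (the OSD id 12 and the weight values are placeholders, not the
exact values used):

    # Lower the temporary reweight of the overfull OSD so CRUSH
    # moves some PGs off it
    ceph osd reweight 12 0.8

    # Or let Ceph pick candidates: reweight OSDs above 120% of the
    # average utilization
    ceph osd reweight-by-utilization 120

Note that this temporary reweight is distinct from ceph osd crush
reweight, which changes the CRUSH weight itself.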


On Tue, Jul 19, 2016 at 2:17 PM, Henrik Korkuc <lists@xxxxxxxxx> wrote:
> On 16-07-19 11:44, M Ranga Swami Reddy wrote:
>>
>> Hi,
>> Using a Ceph cluster with 100+ OSDs; the cluster is about 60% full.
>> One of the OSDs is 95% full.
>> If an OSD is 95% full, does it impact any storage operation? Does this
>> impact VMs/instances?
>
> Yes, a single full OSD will impact the whole cluster: it will block write
> operations to the cluster.
>>
>> I immediately reduced the weight of the OSD that was 95% full. After the
>> re-weight, data rebalanced and the OSD returned to a normal state
>> (i.e. < 80%) within about an hour.
>>
>>
>> Thanks
>> Swami
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


