Re: ceph OSD with 95% full

>That should be a config option, since allowing reads while writes still block is also a danger. Multiple clients could read the same object, perform an in-memory change, and their writes will block.
>Now, which client will 'win' after the full flag has been removed?

>That could lead to data corruption.

Read ops may not cause any issue... but I agree with you that write
IO is the issue, and it gets blocked.
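
To make the possible lost update concrete, here is a rough python-rados sketch of the read-modify-write pattern being discussed (the pool name 'volumes' and the object name 'counter' are made-up examples). If two clients run this while the cluster is full, both block at step 3, and whichever write lands last after the full flag is cleared silently overwrites the other:

    # Rough sketch only -- 'volumes' and 'counter' are made-up names.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')

    # 1. Read the object (assuming reads were still allowed while the
    #    cluster has the 'full' flag set).
    data = ioctx.read('counter')

    # 2. Modify the data in memory.
    new_data = data + b' changed by this client'

    # 3. Write it back. With the 'full' flag set this call blocks; once the
    #    flag clears, the last blocked writer to finish wins and the other
    #    client's change is silently lost (the lost-update race above).
    ioctx.write_full('counter', new_data)

    ioctx.close()
    cluster.shutdown()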

>Just make sure you have proper monitoring on your Ceph cluster. At nearfull it goes into WARN and you should act on that.

Yes..
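
For what it's worth, below is a minimal sketch of the kind of per-OSD check that could run from cron alongside the built-in nearfull warning. The threshold and the exact JSON field names from 'ceph osd df --format json' are assumptions and may need adjusting for your release:

    # Minimal per-OSD fullness check -- a sketch, not a finished tool.
    # Assumes the JSON output of 'ceph osd df' has a "nodes" list whose
    # entries carry "name" and a "utilization" percentage.
    import json
    import subprocess

    NEARFULL_PCT = 85.0  # mirrors the default mon_osd_nearfull_ratio of 0.85

    out = subprocess.check_output(['ceph', 'osd', 'df', '--format', 'json'])
    report = json.loads(out.decode('utf-8'))

    for node in report.get('nodes', []):
        util = float(node.get('utilization', 0.0))
        if util >= NEARFULL_PCT:
            print('WARNING: %s is %.1f%% full' % (node.get('name'), util))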

Thanks
Swami

On Tue, Jul 19, 2016 at 4:36 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>> Op 19 juli 2016 om 12:37 schreef M Ranga Swami Reddy <swamireddy@xxxxxxxxx>:
>>
>>
>> Thanks for the correction... so even if one OSD reaches 95% full, the
>> whole Ceph cluster's IO (R/W) will be blocked... Ideally read IO should
>> still work...
>
> That should be a config option, since allowing reads while writes still block is also a danger. Multiple clients could read the same object, perform an in-memory change, and their writes will block.
>
> Now, which client will 'win' after the full flag has been removed?
>
> That could lead to data corruption.
>
> Just make sure you have proper monitoring on your Ceph cluster. At nearfull it goes into WARN and you should act on that.
>
> Wido
>
>>
>> Thanks
>> Swami
>>
>> On Tue, Jul 19, 2016 at 3:41 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> >
>> >> Op 19 juli 2016 om 11:55 schreef M Ranga Swami Reddy <swamireddy@xxxxxxxxx>:
>> >>
>> >>
>> >> Thanks for the detail...
>> >> So when an OSD is 95% full, only that specific OSD's write IO is blocked.
>> >>
>> >
>> > No, the *whole* cluster will block. In the OSDMap the flag 'full' is set which causes all I/O to stop (even read!) until you make sure the OSD drops below 95%.
>> >
>> > Wido
>> >
>> >> Thanks
>> >> Swami
>> >>
>> >> On Tue, Jul 19, 2016 at 3:07 PM, Christian Balzer <chibi@xxxxxxx> wrote:
>> >> >
>> >> > Hello,
>> >> >
>> >> > On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote:
>> >> >
>> >> >> >> We are using a Ceph cluster with 100+ OSDs and the cluster is about 60% full.
>> >> >> >> One of the OSDs is 95% full.
>> >> >> >> If an OSD is 95% full, does it impact any storage operation? Does this
>> >> >> >> impact VMs/instances?
>> >> >>
>> >> >> >Yes, one OSD will impact the whole cluster. It will block write operations to the cluster.
>> >> >>
>> >> >> Thanks for the clarification. Really?? Is this (OSD at 95% full) designed to
>> >> >> block write I/O for the whole Ceph cluster?
>> >> >>
>> >> > Really.
>> >> > To be more precise, any I/O that touches any PG on that OSD will block.
>> >> > So with a sufficiently large cluster you may still have some (few) I/Os go
>> >> > through, as they don't use that OSD at all.
>> >> >
>> >> > That's why:
>> >> >
>> >> > 1. Ceph has the near-full warning (which of course may need to be
>> >> > adjusted to correctly reflect things, especially with smaller clusters).
>> >> > Once you get that warning, you NEED to take action immediately.
>> >> >
>> >> > 2. You want to graph the space utilization of all your OSDs with something
>> >> > like graphite. That allows you to spot trends of uneven data distribution
>> >> > early and thus react early to it.
>> >> > I re-weight (CRUSH re-weight, as this is permanent and my clusters aren't
>> >> > growing frequently) OSDs so that they are at least within 10% of each
>> >> > other.
>> >> >
>> >> > Christian
>> >> >> Because I have around 251 OSDs, out of which one OSD is 95% full, but the
>> >> >> other 250 OSDs are not even near full...
>> >> >>
>> >> >> Thanks
>> >> >> Swami
>> >> >>
>> >> >>
>> >> >> On Tue, Jul 19, 2016 at 2:17 PM, Henrik Korkuc <lists@xxxxxxxxx> wrote:
>> >> >> > On 16-07-19 11:44, M Ranga Swami Reddy wrote:
>> >> >> >>
>> >> >> >> Hi,
>> >> >> >> We are using a Ceph cluster with 100+ OSDs and the cluster is about 60% full.
>> >> >> >> One of the OSDs is 95% full.
>> >> >> >> If an OSD is 95% full, does it impact any storage operation? Does this
>> >> >> >> impact VMs/instances?
>> >> >> >
>> >> >> > Yes, one OSD will impact the whole cluster. It will block write operations to
>> >> >> > the cluster.
>> >> >> >>
>> >> >> >> I immediately reduced the weight of the OSD that was filled with 95%
>> >> >> >> data. After the re-weight, data rebalanced and the OSD came back to a
>> >> >> >> normal state (i.e. < 80%) within about an hour.
>> >> >> >>
>> >> >> >>
>> >> >> >> Thanks
>> >> >> >> Swami
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >>
>> >> >
>> >> >
>> >> > --
>> >> > Christian Balzer        Network/Systems Engineer
>> >> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
>> >> > http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


