Re: Nautilus 14.2.10 mon_warn_on_pool_no_redundancy

I agree. Please check min_size as well, to cover min_size=1 and size=2
configurations, as we have done in our software for our users for years.
It is important and can prevent a lot of issues.
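
For anyone who wants to check a cluster by hand in the meantime, the
standard pool commands are enough; a quick sketch (the pool name is a
placeholder):

  # show size and min_size for every pool
  ceph osd pool ls detail

  # raise a risky pool to the usual recommended values
  ceph osd pool set <pool-name> size 3
  ceph osd pool set <pool-name> min_size 2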

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Mon, 29 Jun 2020 at 15:06, Wout van Heeswijk <wout@xxxxxxxx> wrote:

> Hi All,
>
> I really like the idea of warning users against using unsafe practices.
>
> Wouldn't it make sense to warn against using min_size=1 instead of size=1?
>
> I've seen data loss happen with size=2 min_size=1 when multiple failures
> occur and writes have been done between the failures. Effectively, the
> new warning below says "It is not considered safe to run with no
> redundancy". That is true, but when a failure occurs or maintenance is
> executed with size=2 and min_size=1, there may be no redundancy for
> newly written data as soon as it is written. A failure of an OSD at
> that moment would result in data loss.
>
> Since you cannot run size=1 with min_size > 1, this use case would also
> be covered.
>
> I understand this has implications for size=2 when executing
> maintenance, but I think most people are not aware of the risks they are
> taking with min_size=1. Those who are aware can suppress the warning.
>
> * Ceph will issue a health warning if a RADOS pool's `size` is set to 1
>    or in other words the pool is configured with no redundancy. This can
>    be fixed by setting the pool size to the minimum recommended value
>    with::
>      ceph osd pool set <pool-name> size <num-replicas>
>    The warning can be silenced with::
>      ceph config set global mon_warn_on_pool_no_redundancy false
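> A min_size counterpart could follow the same pattern. The fix side is
> an existing command::
>      ceph osd pool set <pool-name> min_size 2
>    and the mute switch might look like this (the option name is
>    hypothetical, it does not exist today)::
>      ceph config set global mon_warn_on_pool_min_size_one false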
>
> --
> kind regards,
>
> Wout
> 42on
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


