Re: Min Size equal to Replicated Size Risks


 



Excellent!

Thank you, David and Jack, for your time!

Regards,

G.

The pool will not actually go read-only. All read and write requests
will block until both OSDs are back up. If I were you, I would use
min_size=2 and change it to 1 temporarily when maintenance or
troubleshooting is needed and downtime is not an option.
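For reference, lowering min_size for a maintenance window and restoring
it afterwards is a per-pool setting; a minimal sketch (the pool name
"rbd" is only an example, substitute your own):

```shell
# Temporarily allow I/O with a single replica while one OSD is down
# (pool name "rbd" is an example)
ceph osd pool set rbd min_size 1

# ... perform the maintenance or troubleshooting ...

# Restore the safer setting once both OSDs are back up and recovery
# has finished
ceph osd pool set rbd min_size 2

# Verify the current replication settings for the pool
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```

Remember to set min_size back as soon as the cluster is healthy again,
since running with min_size=1 risks data loss if the remaining OSD fails.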

On Thu, Feb 22, 2018, 5:31 PM Georgios Dimitrakakis  wrote:

 All right! Thank you very much Jack!

 The way I understand this is that it's not necessarily a bad thing,
 as long as it doesn't harm any data or cause any other issue.

 Unfortunately my scenario consists of only two OSDs, therefore there
 is a replication factor of 2 and min_size=1.

 What I am trying to figure out is whether it's more dangerous to have
 min_size=2 rather than 1 in the above scenario, and whether it gives
 me any benefits.

 I am already aware of the *golden* rule about the minimum number of
 replicas (3), but the cluster will be reformed soon, and until then I
 would like to know whether it's better to go with min_size=2 or not.

 Regards,

 G.

> If min_size == size, a single OSD failure will place your pool
> read-only
>
> On 02/22/2018 11:06 PM, Georgios Dimitrakakis wrote:
>> Dear all,
>>
>> I would like to know if there are additional risks when running Ceph
>> with "Min Size" equal to "Replicated Size" for a given pool.
>>
>> What are the drawbacks, and what could go wrong in such a
>> scenario?
>>
>> Best regards,
>>
>> G.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>






