Re: What is the meaning of size and min_size for erasure-coded pools?

You talked about "using default settings wherever possible"... Well, Ceph's default, everywhere it applies, is to not allow writes unless you can still lose at least one more copy without data loss.  If your bosses require you to be able to lose 2 servers and still serve customers, then tell them that Ceph requires you to have 3 parity chunks for the data.
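
For what it's worth, a minimal sketch of what that looks like on the command line is below; the profile name, pool name and k/m values are made up for illustration:

    # k=4 data chunks + m=3 coding (parity) chunks, one chunk per host,
    # so up to 3 hosts can be lost before data is lost, and 2 hosts
    # while the pool stays writable with the default min_size of k+1.
    ceph osd erasure-code-profile set ec-k4-m3 k=4 m=3 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec-k4-m3

This of course needs at least k+m = 7 hosts so every chunk lands on a separate host.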

Why do you want to change your one and only copy of the data while you already have a degraded system?  And not just a degraded system, but one where two fifths of your servers are down... That sounds awful, terrible, and just plain bad.

To directly answer your question about min_size: min_size does not affect where data is placed.  It only determines when a PG considers itself to have too few copies online to serve read or write requests.
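
If it helps, both values can be inspected, and min_size adjusted, per pool with the usual pool commands; "ecpool" is just a placeholder name here:

    ceph osd pool get ecpool size        # total chunks per object (k+m for an EC pool)
    ceph osd pool get ecpool min_size    # fewest chunks that must be up for the PG to serve I/O
    ceph osd pool set ecpool min_size 5  # e.g. k+1 for a k=4, m=3 profile

Lowering min_size below k+1 trades away that last safety margin, which is exactly the situation described above.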

On Tue, May 8, 2018 at 7:47 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
2018-05-08 1:46 GMT+02:00 Maciej Puzio <mkp37215@xxxxxxxxx>:
Paul, many thanks for your reply.
Thinking about it, I can't decide if I'd prefer to operate the storage
server without redundancy, or have it automatically force downtime,
subjecting me to the rage of my users and my boss.
But I think the typical expectation is that the system serves the
data while it is able to do so.

If you want to prevent angry bosses, you would have made 10 OSD hosts,
or some other large number, so that Ceph could place PGs over more places,
so that 2 lost hosts would not have so much impact, but also so that it can
recover each PG onto one of the 10 (minus the two broken ones, minus the
three that already hold the data you want to spread out) other hosts
and get back into full service even with two lost hosts.
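
One rough way to sanity-check that CRUSH actually has somewhere to recover to is to look at the host-level topology and at where one example PG is mapped (the PG id below is only a placeholder):

    ceph osd tree     # host buckets CRUSH can choose between as failure domains
    ceph pg map 1.0   # up/acting OSD set for one PG, e.g. pg 1.0

With more hosts than k+m chunks, the chunks from a failed host can be rebuilt on a surviving host and the PG goes back to active+clean instead of staying degraded.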

It's fun to test assumptions and "how low can I go", but if you REALLY wanted
a cluster with resilience to planned and unplanned maintenance,
you would have redundancy, just like that RAID6 disk box would
presumably have a fair amount of hot and perhaps cold spares nearby to kick
in if lots of disks started to go missing.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
