Re: What is the meaning of size and min_size for erasure-coded pools?

2018-05-08 1:46 GMT+02:00 Maciej Puzio <mkp37215@xxxxxxxxx>:
> Paul, many thanks for your reply.
> Thinking about it, I can't decide if I'd prefer to operate the storage
> server without redundancy, or have it automatically force a downtime,
> subjecting me to the rage of my users and my boss.
> But I think the typical expectation is that the system serves the data
> while it is able to do so.

If you want to prevent angry bosses, you would have built 10 OSD hosts,
or some other large number, so that Ceph could place PGs across more
failure domains. Two lost hosts would then have far less impact, and
Ceph could also recover each PG onto one of the 10 other hosts (minus
the two broken ones, minus the three that already hold the data you
want to spread out) and get back to full service even with two hosts
lost.

It's fun to test assumptions and "how low can I go", but if you REALLY
wanted a cluster with resilience to planned and unplanned maintenance,
you would have spare redundancy, just like that RAID6 disk box would
presumably have a fair number of hot and perhaps cold spares nearby to
kick in if lots of disks started going missing.
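And if you do get caught with two hosts down anyway, the knob in
question is min_size itself; a sketch, using the same illustrative
pool name as above:

    # Accept I/O with only k shards left, i.e. knowingly run with zero
    # remaining redundancy until recovery completes:
    ceph osd pool set ecpool min_size 3

    # Put the safer default back once the hosts have recovered:
    ceph osd pool set ecpool min_size 4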

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
