Hello,

On Sun, 27 Jul 2014 19:02:11 +0000 Edward Huyer wrote:

> Ceph has a default pool size of 3. Is it a bad idea to run a pool of
> size 2? What about size 2 min_size 1?
>
min_size 1 is sensible; a size of 2 obviously won't protect you against
dual disk failures, which do happen, and happen with near certainty once
your cluster gets big enough.

> I have a cluster I'm moving data into (on RBDs) that is full enough with
> size 3 that I'm bumping into nearfull warnings. Part of that is because
> of the amount of data, part is probably because of suboptimal tuning
> (Proxmox VE doesn't support all the tuning options), and part is
> probably because of unbalanced drive distribution and multiple drive
> sizes.
>
> I'm hoping I'll be able to solve the drive size/distribution issue, but
> in the meantime, what problems could the size and min_size changes
> create (aside from the obvious issue of fewer replicas)?
>
I'd address all of those issues (starting with setting the correct CRUSH
weights for your OSDs), because it is something you will need to do
anyway down the road. Alternatively, add more nodes and OSDs.

While setting the replica count down to 2 will "solve" your problem, it
will also create another one besides the reduced redundancy: it will
reshuffle all your data, slowing down your cluster (to the point of
becoming unresponsive if it isn't designed and configured well). Murphy
might take those massive disk reads and writes as a cue to provide you
with a double disk failure as well. ^o^

Christian

-- 
Christian Balzer        Network/Systems Engineer
chibi at gol.com         Global OnLine Japan/Fusion Communications
http://www.gol.com/
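For reference, the knobs discussed above boil down to a handful of commands.
This is only a rough sketch: the pool name "rbd", the OSD id and the weight
value are placeholders for illustration, not values taken from this thread.

    ceph osd tree                        # show the CRUSH tree with current OSD weights
    ceph df                              # show cluster/pool utilisation and nearfull status

    # reweight an OSD to match its capacity (common convention: weight = size in TB)
    ceph osd crush reweight osd.3 1.82

    # reduce replication on a pool -- the change warned about above,
    # which kicks off large-scale data movement
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

    ceph -w                              # watch recovery/backfill while it runs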