Re: Worst thing that can happen if I have size= 2

Hi all,

On 4/2/21 at 11:56, Frank Schilder wrote:
- three servers
- three monitors
- six OSDs (two per server)
- size=3 and min_size=2
This is a set-up that I would not run at all. The first reason is that Ceph lives on the law of large numbers, and 6 is a small number. Hence, your OSDs fill up unevenly due to the poor distribution.

What comes to my mind instead is a hyper-converged server with 6+ disks in a RAID10 array, possibly with a good controller with battery-backed or other non-volatile cache. Ceph will never beat that performance. Put in some extra disks as hot spares and you have close to self-healing storage.

Such a small Ceph cluster will inherit all the baddies of Ceph (performance, maintenance) without giving any of the goodies (scale-out, self-healing, proper distributed RAID protection). Ceph needs scale to perform well and to pay off the maintenance and architectural effort.


It's funny, because we have multiple clusters similar to this, and we and our customers couldn't be happier. Just use an HCI solution (for example Proxmox VE, but there are others) to manage everything.

Maybe the weakest point in that configuration is having only 2 OSDs per node; the OSD nearfull ratio must be tuned accordingly so that no OSD goes beyond about 0.45 utilization, so that if one disk fails, the other OSD in the same node has enough free space to absorb the healing replication.
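As a rough sketch of what that tuning could look like on a recent Ceph release (the 0.45 figure is just the rule of thumb from above, not an official recommendation):

    # warn as soon as any OSD passes ~45% utilization, leaving room for healing
    ceph osd set-nearfull-ratio 0.45

    # check per-OSD utilization to see how even the distribution really is
    ceph osd df

That way you get a health warning while there is still enough free space on the surviving OSD of a node to re-create the failed disk's replicas.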

When deciding min_size, one has to balance availability (with min_size=2, a disk failure while one node is down for maintenance will block I/O) against the risk of data loss (with min_size=1, writes are accepted with only a single surviving copy).
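For reference, assuming a replicated pool named "rbd" (the pool name is only an example), those parameters map to the usual pool settings:

    # keep three copies, keep serving I/O while at least two are available
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2

    # verify the current values
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

With these values, I/O pauses whenever fewer than two copies are available, which is exactly the trade-off described above.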

Not everyone needs to max out SSD IOPS; a decent, highly available setup can be of much value...

Cheers


--
Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO/
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



